diff --git a/README.md b/README.md index 6ce41734..cb82a71a 100644 --- a/README.md +++ b/README.md @@ -6,25 +6,31 @@ SAAP docs are built using [MkDocs](https://github.com/mkdocs/mkdocs) which is ba This repository has Github action workflow which checks the quality of the documentation and builds the Dockerfile image on Pull Requests. On a push to the main branch, it will create a GitHub release and push the built Dockerfile image to an image repository. -## Build Dockerfile image and run container +## Build locally + +There are at least three options to get fast continuous feedback during local development: + +1. Build and run the docs using the Dockerfile image +1. Run the commands locally +1. Use Tilt -Build Dockerfile image: +### Build Dockerfile image and run container -```shell +Build Dockerfile test image: + +```bash $ docker build . -t test ``` -Run container: +Run test container: -```shell +```bash $ docker run -p 8080:8080 test ``` Then access the docs on [`localhost:8080`](localhost:8080). -## Build locally - -It is preferred to build and run the docs using the Dockerfile image, however an alternative is to run the commands locally. +### Run commands locally Use [virtualenvwrapper](https://virtualenvwrapper.readthedocs.io/en/latest/install.html) to set up Python virtual environments. @@ -32,18 +38,42 @@ Install [Python 3](https://www.python.org/downloads/). Install mkdocs-material and mermaid plugin: -```sh +```bash $ pip3 install mkdocs-material mkdocs-mermaid2-plugin ``` Finally serve the docs using the built-in web server which is based on Python http server - note that the production build will use Nginx instead: -``` +```bash $ mkdocs serve ``` or -``` +```bash $ python3 -m mkdocs serve -``` \ No newline at end of file +``` + +### QA Checks + +Markdown linting: + +```bash +$ brew install markdownlint-cli +$ markdownlint -c .markdownlint.yaml content +``` + +Spell checking: + +```bash +$ brew install vale +$ vale content +``` + +## Use Tilt + +Install [Tilt](https://docs.tilt.dev/index.html), then run: + +```bash +$ tilt up +``` diff --git a/Tiltfile b/Tiltfile new file mode 100644 index 00000000..77e77d7f --- /dev/null +++ b/Tiltfile @@ -0,0 +1,22 @@ +local_resource('install vale', + cmd='which vale > /dev/null || brew install vale') +local_resource('spell check with vale', + cmd='vale content', + deps='./content/', + resource_deps=['install vale']) + +local_resource('install markdownlint', + cmd='which markdownlint > /dev/null || brew install markdownlint-cli') +local_resource('markdownlint', + cmd='markdownlint -c .markdownlint.yaml content', + deps='./content/', + resource_deps=['install markdownlint']) + +local_resource('build test image', + cmd='docker build -t test .', + deps='./content/', + resource_deps=['spell check with vale', 'markdownlint']) + +local_resource('run test container', + cmd='docker run -d -p 8080:8080 test', + resource_deps=['build test image']) diff --git a/content/about/cloud-providers/overview.md b/content/about/cloud-providers/overview.md index 6211be3c..f3f9ac01 100644 --- a/content/about/cloud-providers/overview.md +++ b/content/about/cloud-providers/overview.md @@ -1,14 +1,15 @@ # Overview -Stakater App Agility Platform (SAAP) is currently supported on following cloud providers: +Stakater App Agility Platform (SAAP) supports all clouds which are based on OpenStack, VMWare or BareMetals: -* [Azure](./azure.md) * [AWS](./aws.md) -* [Google](./gcp.md) +* [Azure](./azure.md) * [Binero](./binero.md) -* [UpCloud](./upcloud.md) -* 
[Exoscale](./exoscale.md) * [Complior](./complior.md) * [Elastx](./elastx.md) +* [Exoscale](./exoscale.md) +* [GCP](./gcp.md) +* [SafeSpring](./safespring.md) +* [UpCloud](./upcloud.md) -We support all sorts of clouds which are based on OpenStack, VMWare or BareMetals; just drop us an email at [`sales@stakater.com`](mailto:sales@stakater.com) if you would like to include your cloud! +Just drop us an email at [`sales@stakater.com`](mailto:sales@stakater.com) if you would like to partner up with another cloud! diff --git a/content/about/saap-key-differentiators.md b/content/about/saap-key-differentiators.md index 9c13749a..6b556b46 100644 --- a/content/about/saap-key-differentiators.md +++ b/content/about/saap-key-differentiators.md @@ -1,6 +1,6 @@ # Key Differentiators -Stakater App Agility Platform is a true hybrid-cloud enabler. All components of Stakater App Agility Platform use common standards which can run on any cloud service, so it is easy for you to run in a hybrid environment, as well as migrate from one cloud to another. We don’t just run the platform, we enable it for you, giving you substantial Return on your Investment +Stakater App Agility Platform is a true hybrid-cloud enabler. All components of Stakater App Agility Platform use common standards which can run on any cloud service, so it is easy for you to run in a hybrid environment, as well as migrate from one cloud to another. We don't just run the platform, we enable it for you, giving you substantial Return on your Investment - We support infra nodes, i.e. we fully manage nodes that run all managed addons of your choice. - We manage the addons as well; and provide SLA on them. diff --git a/content/about/service-definition/account-management.md b/content/about/service-definition/account-management.md index 402dbb7c..034bbe1a 100644 --- a/content/about/service-definition/account-management.md +++ b/content/about/service-definition/account-management.md @@ -2,14 +2,16 @@ ## Billing +SAAP requires a minimum base cluster purchase with minimum technical requirements specified in [Sizing](../../for-administrators/plan-your-environment/sizing.md). + +Customers can either use their existing cloud infrastructure account to deploy SAAP, or use one of Stakater's partners to create infrastructure. The customer always pays Stakater for the SAAP subscription and pays the cloud provider for the cloud costs. It is the customer's responsibility to pre-purchase or provide compute instances to ensure lower cloud infrastructure costs. + +Billing for SAAP is on a monthly basis, or yearly basis with discounts. + ## Cloud Providers -SAAP is available as a managed service on the following cloud providers: +SAAP is available as a managed platform on the cloud providers listed on the [cloud providers overview](../cloud-providers/overview.md). -- Amazon Web Services (AWS) -- Google Cloud Platform (GCP) -- Azure -- Binero -- UpCloud +## Cluster Creation -## Cluster Provisioning +The administrative section contains information about [cluster creation](../../for-administrators/create-your-cluster.md). 
diff --git a/content/about/service-definition/artifacts-management.md b/content/about/service-definition/artifacts-management.md deleted file mode 100644 index 1361deab..00000000 --- a/content/about/service-definition/artifacts-management.md +++ /dev/null @@ -1 +0,0 @@ -# Artifacts Management diff --git a/content/about/service-definition/cicd-pipelines.md b/content/about/service-definition/cicd-pipelines.md deleted file mode 100644 index e69de29b..00000000 diff --git a/content/about/service-definition/k8s-management.md b/content/about/service-definition/k8s-management.md deleted file mode 100644 index fdfd3617..00000000 --- a/content/about/service-definition/k8s-management.md +++ /dev/null @@ -1,3 +0,0 @@ -# Kubernetes Management - -Private clusters diff --git a/content/about/service-definition/logging.md b/content/about/service-definition/logging.md index 8cfa3978..c2d68c97 100644 --- a/content/about/service-definition/logging.md +++ b/content/about/service-definition/logging.md @@ -2,14 +2,14 @@ SAAP provides an optional integrated log forwarding to log store. -## Cluster audit logging +## Cluster Audit Logging -Cluster audit logs are available through log store, if the integration is enabled. If the integration is not enabled, you can request the audit logs by opening a support case. +Cluster audit logs are available through log store, if the integration is enabled. If the integration is not enabled, you can request the audit logs by opening a [support ticket](https://support.stakater.com/index.html). -## Application logging +## Application Logging Application logs sent to `STDOUT` are collected by log collector and forwarded to log store through the cluster logging stack, if it is installed. -## Data retention +## Data Retention -By default only 7 days data is kept; and if you want to store for long term then open a support case. +By default only 7 days data is kept; if you want to store data for longer then open a [support ticket](https://support.stakater.com/index.html). diff --git a/content/about/service-definition/monitoring.md b/content/about/service-definition/monitoring.md index 921cacea..cae756a5 100644 --- a/content/about/service-definition/monitoring.md +++ b/content/about/service-definition/monitoring.md @@ -2,14 +2,14 @@ This section provides information about the service definition for SAAP monitoring. -## Cluster metrics +## Cluster Metrics -SAAP instances come with an integrated Prometheus stack for cluster monitoring including CPU, memory, and network-based metrics. This is accessible through the web console. These metrics also allow for horizontal pod autoscaling based on CPU or memory metrics. +SAAP come with an integrated Prometheus stack for cluster monitoring including CPU, memory, and network-based metrics. This is accessible through the SAAP web console. These metrics also allow for horizontal pod autoscaling based on CPU or memory metrics. -## Application monitoring +## Application Monitoring SAAP provides an optional application monitoring stack based on Prometheus to monitor business critical applications. This allows for adding scrape targets in user namespaces. -## Data retention +## Data Retention -By default only 7 days data is kept; and if you want to store for long term then open a support case. +By default only seven (7) days data is kept; if you want to store data for longer then open a [support ticket](https://support.stakater.com/index.html). 
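A minimal sketch of the CPU-based horizontal pod autoscaling mentioned under Cluster Metrics above; the deployment name, namespace, and utilization target are hypothetical placeholders, not values shipped with SAAP:

```yaml
# Hypothetical example: scale a customer Deployment on CPU utilization using
# the cluster metrics described above. Names and thresholds are placeholders.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app
  namespace: my-namespace
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app
  minReplicas: 2
  maxReplicas: 5
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 80
```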
diff --git a/content/about/service-definition/multitenancy.md b/content/about/service-definition/multitenancy.md deleted file mode 100644 index e69de29b..00000000 diff --git a/content/about/service-definition/networking.md b/content/about/service-definition/networking.md index 0f47adf3..fc4106ba 100644 --- a/content/about/service-definition/networking.md +++ b/content/about/service-definition/networking.md @@ -1,21 +1,43 @@ # Networking -## Custom domains for applications +## Custom Domains for applications -To use a custom hostname for a route, you must update your DNS provider by creating a canonical name (CNAME) record. Your CNAME record should map the OpenShift canonical router hostname to your custom domain. The OpenShift canonical router hostname is shown on the Route Details page after a Route is created. Alternatively, a wildcard CNAME record can be created once to route all subdomains for a given hostname to the cluster’s router. +To use a custom hostname for a route, you must update your DNS provider by creating a canonical name (CNAME) record. Your CNAME record should map the SAAP canonical router hostname to your custom domain. The SAAP canonical router hostname is shown on the Route Details page after a Route is created. Alternatively, a wildcard CNAME record can be created once to route all subdomains for a given hostname to the cluster's router. ## Custom domains for cluster services -Custom domains and subdomains are not available for the platform service routes, for example, the API or web console routes, or for the default application routes. +Custom domains and subdomains for cluster services are available except for the SAAP service routes, for example, the API or web console routes, or for the default application routes. ## Domain validated certificates -SAAP includes TLS security certificates needed for both internal and external services on the cluster. For external routes, there are two, separate TLS wildcard certificates that are provided and installed on each cluster, one for the web console and route default hostnames and the second for the API endpoint. Let’s Encrypt is the certificate authority used for certificates. Routes within the cluster, for example, the internal API endpoint, use TLS certificates signed by the cluster’s built-in certificate authority and require the CA bundle available in every pod for trusting the TLS certificate. +SAAP includes TLS security certificates needed for both internal and external services on the cluster. For external routes, there are two, separate TLS wildcard certificates that are provided and installed on each cluster, one for the web console and route default hostnames and the second for the API endpoint. Let's Encrypt is the certificate authority used for certificates. Routes within the cluster, for example, the internal API endpoint, use TLS certificates signed by the cluster's built-in certificate authority and require the CA bundle available in every pod for trusting the TLS certificate. ## Load-balancers -## Network usage +SAAP is normally created via the installer provisioned infrastructure (IPI) installation method which installs operators that manage load-balancers in the customer cloud, and API load-balancers to the master nodes. Application load-balancers are created as part of creating routers and ingresses. The operators use cloud identities to interact with the cloud providers API to create the load-balancers. 
+ +User-provisioned installation (UPI) method is also possible if extra security is needed and then you must create the API and application ingress load balancing infrastructure separately and before SAAP is installed. + +SAAP has a default router/ingress load-balancer that is the default application load-balancer, denoted by `apps` in the URL. The default load-balancer can be configured in SAAP to be either publicly accessible over the internet, or only privately accessible over a pre-existing private connection. All application routes on the cluster are exposed on this default router load-balancer, including cluster services such as the logging UI, metrics API, and registry. + +SAAP has an optional router/ingress load-balancer that is a secondary application load-balancer, denoted by `apps2` in the URL. The secondary load-balancer can be configured in SAAP to be either publicly accessible over the internet, or only privately accessible over a pre-existing private connection. If a 'Label match' is configured for this router load-balancer, then only application routes matching this label will be exposed on this router load-balancer, otherwise all application routes are also exposed on this router load-balancer. + +SAAP has optional load-balancers for services that can be mapped to a service running on SAAP to enable advanced ingress features, such as non-HTTP/SNI traffic or the use of non-standard ports. Cloud providers may have a quota that limits the number of load-balancers that can be used within each cluster. + +## Network use + +Network use is not monitored, and is billed directly by the cloud provider. ## Cluster ingress +Project administrators can add route annotations for ingress control through IP allow-listing. + +Ingress policies can also be changed by using `NetworkPolicy` objects. + +All cluster ingress traffic goes through the defined load-balancers. Direct access to all nodes is blocked by cloud configuration. + ## Cluster egress + +`EgressNetworkPolicy` objects can control pod egress traffic to prevent or limit outbound traffic in SAAP. + +Public outbound traffic from the control plane and infrastructure nodes is required and necessary to maintain cluster image security and cluster monitoring. This requires the `0.0.0.0/0` route to belong only to the internet gateway. diff --git a/content/about/service-definition/overview.md b/content/about/service-definition/overview.md index 93666750..36b2c7a0 100644 --- a/content/about/service-definition/overview.md +++ b/content/about/service-definition/overview.md @@ -1,20 +1,23 @@ # Overview -This section outlines the service definition for the SAAP: +This section outlines the service definition for Stakater App Agility Platform (SAAP): -1. [Managed Kubernetes (Red Hat OpenShift)](platform.md) -2. [Managed Monitoring Stack (Prometheus, Grafana, Alert Manager)](monitoring.md) -3. [Managed Logging Stack (Fluentd, Vector, ElasticSearch, Kibana)](logging.md) -4. Managed Container Registry (Nexus) -5. Managed Artifacts Store (Nexus) -6. Managed Backup Restore (Velero) -7. [Managed Secrets Management (Vault)](secrets-management.md) -8. Managed Multi-tenancy (MTO) -9. Managed Service Mesh (`Istio`, `Kiali`, `Jagaer`, `Prometheus`) -10. Managed Certs -11. Managed CD (ArgoCD) -12. Managed CI (Tekton) -13. Managed Policy Enforcement (Gatekeeper, OPA) -14. Managed Downtime Alerting (IMC, UptimeRobot) -15. Managed Dynamic Environments (Tronador) -16. Managed Dynamic Application Reload (Reloader) +1. 
[Managed Kubernetes (Red Hat OpenShift)](./platform.md) +1. [Account Management](./account-management.md) +1. [Storage](./storage.md) +1. [Security](./security.md) +1. [Networking](./networking.md) +1. [Managed Monitoring Stack (Prometheus, Grafana, Alert Manager, UptimeRobot)](./monitoring.md) +1. [Managed Logging Stack (Fluentd, Vector, ElasticSearch, Kibana)](./logging.md) +1. [Managed Container Registry and Artifact Store (Nexus)](../../managed-addons/nexus/overview.md) +1. [Managed Backup Restore (Velero)](../../managed-addons/velero/overview.md) +1. [Managed Secrets Management (Vault)](./secrets-management.md) +1. [Managed Multi-tenancy (MTO)](../../managed-addons/mto/overview.md) +1. [Managed Service Mesh (`Istio`, `Kiali`, `Jagaer`, `Prometheus`)](./service-mesh.md) +1. [Managed Certificates](../../managed-addons/cert-manager/overview.md) +1. [Managed Continuous Delivery (ArgoCD)](../../managed-addons/argocd/overview.md) +1. [Managed Continuous Integration (Tekton)](../../managed-addons/tekton/introduction.md) +1. [Managed Policy Enforcement (Kyverno, OPA)](../../for-cisos/policies/policies.md) +1. [Managed Downtime Alerting (IMC)](../../managed-addons/imc/overview.md) +1. [Managed Dynamic Environments (Tronador)](../../managed-addons/tronador/overview.md) +1. [Managed Dynamic Application Reload (Reloader)](../../managed-addons/reloader/overview.md) diff --git a/content/about/service-definition/platform.md b/content/about/service-definition/platform.md index e14a2767..a6ca86fa 100644 --- a/content/about/service-definition/platform.md +++ b/content/about/service-definition/platform.md @@ -2,11 +2,11 @@ ## Autoscaling -Node autoscaling is available on few clouds; you can find details in the relevant cloud section. You can configure the autoscaler option to automatically scale the number of machines in a cluster. +Node autoscaling is available on few clouds; you can find details in the relevant [cloud section](../cloud-providers/overview.md). You can configure the autoscaler option to automatically scale the number of machines in a cluster. ## Daemonsets -Customers can create and run daemonsets on SAAP. To restrict daemonsets to only running on worker nodes, use the following `nodeSelector`: +Customers can create and run daemonsets on SAAP. To restrict daemonsets to only run on worker nodes, use the following `nodeSelector`: ```yaml ... @@ -16,40 +16,36 @@ spec: ... ``` -## Multiple availability zone +## Multiple Availability Zone In a multiple availability zone cluster, control plane nodes are distributed across availability zones and at least one worker node is required in each availability zone. -## Node labels +## Node Labels Custom node labels are created by Stakater during node creation and cannot be changed on SAAP at this time. However, custom labels are supported when creating new machine pools. -## OpenShift version +## OpenShift Version -SAAP is run as a managed service and is kept up to date with the latest OpenShift Container Platform version. Upgrade scheduling to the latest version is available. +SAAP is run as a managed service and is kept up to date with the latest OpenShift Container Platform version, see [change management in responsibilities](../responsibilities.md#change-management). Upgrade scheduling to the latest version is available. 
-## Container engine +## Container Engine SAAP runs on OpenShift 4 and uses [CRI-O](https://www.redhat.com/en/blog/red-hat-openshift-container-platform-4-now-defaults-cri-o-underlying-container-engine) as the only available container engine. -## Operating system +## Operating System SAAP runs on OpenShift 4 and uses Red Hat CoreOS as the operating system for all control plane and worker nodes. -## Windows Containers - -Red Hat OpenShift support for Windows Containers is not available on SAAP at this time. - ## Upgrades -Upgrades can be scheduled by opening a [support case](https://support.stakater.com/index.html). +Upgrades can be done either immediately or be scheduled at a specific date by opening a [support ticket](https://support.stakater.com/index.html). -See the [SAAP Life Cycle](../update-lifecycle.md) for more information on the upgrade policy and procedures. +See the [SAAP Update Life Cycle](../update-lifecycle.md) for more information on the upgrade policy and procedures. -## Kubernetes Operator support +## Kubernetes Operator Support -All Operators listed in the Operator Hub marketplace should be available for installation. These operators are considered customer workloads, and are not monitored by Stakater SRE. +All operators listed in the [Operator Hub marketplace](https://operatorhub.io/) should be available for installation. These operators are considered customer workloads, and are not monitored by Stakater SRE, see [customer applications responsibilities](../responsibilities.md#data-and-applications). -## Red Hat Operator support +## Red Hat Operator Support -Red Hat workloads typically refer to Red Hat-provided Operators made available through Operator Hub. Red Hat workloads are not managed by the Stakater SRE team, and must be deployed on worker nodes and must be managed by the customer. +Red Hat workloads typically refer to Red Hat-provided operators made available through [Operator Hub](https://operatorhub.io/). Red Hat workloads are not managed by the Stakater SRE team, and must be deployed on worker nodes and must be managed by the customer, see [customer applications responsibilities](../responsibilities.md#data-and-applications). diff --git a/content/about/service-definition/security.md b/content/about/service-definition/security.md index 0205599e..0b88fe9d 100644 --- a/content/about/service-definition/security.md +++ b/content/about/service-definition/security.md @@ -1,8 +1,8 @@ # Security -## Authentication provider +## Authentication Provider -Authentication for the cluster is configured as part of cluster creation process. SAAP is not an identity provider, and all access to the cluster must be managed by the customer as part of their integrated solution. Provisioning multiple identity providers provisioned at the same time is supported. The following identity providers are supported: +Authentication for the cluster is configured as part of the [cluster creation process](../../for-administrators/create-your-cluster.md). SAAP is not an identity provider, and all access to the cluster must be managed by the customer as part of their integrated solution. Creating multiple identity providers at the same time is supported. The following identity providers are supported: - GitHub or GitHub Enterprise OAuth - GitLab OAuth @@ -10,14 +10,14 @@ Authentication for the cluster is configured as part of cluster creation process - LDAP - OpenID connect -## Privileged containers +## Privileged Containers -Privileged containers are not available by default on SAAP. 
The `anyuid` and `nonroot` Security Context Constraints (SCC) are available for members of the `sca` group, and should address many use cases. Privileged containers are only available for cluster-admin users. +Privileged containers are not available by default on SAAP. The `anyuid` and `nonroot` Security Context Constraints (SCC) are available for members of the `sca` (SAAP Cluster Admin) group, and should address many use cases. Privileged containers are only available for `sca` users. -## Customer administrator user +## Customer Administrator User -In addition to normal users +In addition to normal users, Stakater provides access to a SAAP-specific group called `sca`. The permissions for this role is described on the [roles in SAAP](../../for-cisos/authentication-authorization/saap-authorization-roles.md). -## Cluster administration role +## Cluster Administration Role -As an administrator +As an administrator of SAAP, you have access to the cluster-admin role. While logged in to an account with the cluster-admin role, users have mostly unrestricted access to control and configure the cluster. diff --git a/content/about/service-definition/service-mesh.md b/content/about/service-definition/service-mesh.md index 66e56a0d..51539151 100644 --- a/content/about/service-definition/service-mesh.md +++ b/content/about/service-definition/service-mesh.md @@ -1,19 +1,17 @@ # Service Mesh -SAAP provides an optional one fully managed service mesh control instance, it means that SAAP provides a pre-configured and managed service mesh infrastructure for handling service-to-service communication within a microservices architecture. +SAAP provides an optional fully managed service mesh control instance. SAAP provides a pre-configured and managed service mesh infrastructure for handling service-to-service communication within a microservices architecture: -Here's an explanation of the managed service mesh instance within SAAP: +- *Pre-configured Service Mesh Infrastructure*: SAAP includes a pre-configured instance of a service mesh, which is a dedicated infrastructure layer for managing and securing communication between services in a microservices architecture. This instance eliminates the need for administrators to manually configure and deploy a service mesh from scratch. -1. Pre-configured Service Mesh Infrastructure: SAAP includes a pre-configured instance of a service mesh, which is a dedicated infrastructure layer for managing and securing communication between services in a microservices architecture. This instance is already set up and ready to use, eliminating the need for administrators to manually configure and deploy a service mesh from scratch. +- *Simplified Service-to-Service Communication*: The managed service mesh instance within SAAP simplifies the complexity of service-to-service communication. It provides a consistent and reliable mechanism for handling communication between services, abstracting away the underlying networking details and allowing developers to focus on building and deploying their microservices. -2. Simplified Service-to-Service Communication: The managed service mesh instance within SAAP simplifies the complexity of service-to-service communication. It provides a consistent and reliable mechanism for handling communication between services, abstracting away the underlying networking details and allowing developers to focus on building and deploying their microservices. 
+- *Service Discovery and Load Balancing*: The managed service mesh instance offers built-in service discovery and load balancing capabilities. It automatically discovers services within the mesh and routes traffic to the appropriate instances, distributing the load evenly to optimize performance and ensure high availability. -3. Service Discovery and Load Balancing: The managed service mesh instance offers built-in service discovery and load balancing capabilities. It automatically discovers services within the mesh and routes traffic to the appropriate instances, distributing the load evenly to optimize performance and ensure high availability. +- *Traffic Management and Resilience*: SAAP's managed service mesh instance enables advanced traffic management techniques. It allows for fine-grained control over traffic routing, enabling features such as request routing, traffic splitting, and canary deployments. This allows organizations to implement various traffic management strategies and improve application resilience in the face of failures or traffic spikes. -4. Traffic Management and Resilience: SAAP's managed service mesh instance enables advanced traffic management techniques. It allows for fine-grained control over traffic routing, enabling features such as request routing, traffic splitting, and canary deployments. This allows organizations to implement various traffic management strategies and improve application resilience in the face of failures or traffic spikes. +- *Security and Encryption*: The managed service mesh instance within SAAP incorporates security features to secure communication between services. It includes built-in support for mutual TLS encryption, ensuring that all communication within the service mesh is encrypted and authenticated. This helps protect sensitive data and prevents unauthorized access to services within the mesh. -5. Security and Encryption: The managed service mesh instance within SAAP incorporates security features to secure communication between services. It includes built-in support for mutual TLS encryption, ensuring that all communication within the service mesh is encrypted and authenticated. This helps protect sensitive data and prevents unauthorized access to services within the mesh. - -6. Observability and Monitoring: SAAP's managed service mesh instance integrates with observability tools to provide insights into the performance and health of the microservices ecosystem. It offers monitoring and tracing functionalities, allowing organizations to track and analyze requests flowing through the service mesh, identify bottlenecks, and diagnose issues for troubleshooting and optimization. +- *Observability and Monitoring*: SAAP's managed service mesh instance integrates with observability tools to provide insights into the performance and health of the microservices ecosystem. It offers monitoring and tracing functionalities, allowing organizations to track and analyze requests flowing through the service mesh, identify bottlenecks, and diagnose issues for troubleshooting and optimization. By including a managed service mesh instance, SAAP simplifies and streamlines the deployment, management, and security of service-to-service communication within a microservices architecture. It provides a ready-to-use service mesh infrastructure that abstracts away the complexities and offers essential features for reliable, secure, and observable microservices communication. 
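As a rough sketch of the traffic splitting and canary deployments mentioned above, an Istio `VirtualService` can weight traffic between two versions of a workload; the host, subsets, and weights below are hypothetical and assume matching subsets are defined in a `DestinationRule`:

```yaml
# Hypothetical canary split: 90% of traffic to v1, 10% to v2.
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
    - reviews
  http:
    - route:
        - destination:
            host: reviews
            subset: v1
          weight: 90
        - destination:
            host: reviews
            subset: v2
          weight: 10
```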
diff --git a/content/about/service-definition/storage.md b/content/about/service-definition/storage.md index e0e8ac41..8e0268f5 100644 --- a/content/about/service-definition/storage.md +++ b/content/about/service-definition/storage.md @@ -1 +1,5 @@ # Storage + +All storage needed for SAAP will be provided through the cloud provider of the customer's choice. + +Alerts is the only data that is forwarded to a third-party system, in this case a PagerDuty instance in the European Union. diff --git a/content/about/update-lifecycle.md b/content/about/update-lifecycle.md index 39e3cda2..4fb8ba4b 100644 --- a/content/about/update-lifecycle.md +++ b/content/about/update-lifecycle.md @@ -1 +1,38 @@ -# Update Lifecycle +# SAAP Update Life Cycle + +This update life cycle is published for customers and partners to effectively plan, deploy, and support their applications running on SAAP. + +SAAP is a managed instance of Red Hat OpenShift and maintains an independent release schedule. The availability of Security Advisories and Bug Fix Advisories for a specific version are dependent upon the Red Hat OpenShift Container Platform life cycle policy and subject to the SAAP maintenance schedule. + +## Semantic Versioning + +[Semantic versioning](https://semver.org/) and its corresponding terminology is used for SAAP versioning. + +## Major Versions + +Major versions of SAAP, for example version 1, are supported for one-year following the release of a subsequent major version or the retirement of the product. After this time, clusters would need to be upgraded or migrated to the next major version. + +## Minor Versions + +Stakater supports all minor versions for at least a one-year period following general availability of the given minor version. + +Customers are notified 30 days prior to the end of the support period. Clusters must be upgraded to a supported minor version prior to the end of the support period, or the cluster will enter a [Limited Support](#limited-support-status) status. + +## Patch Versions + +For reasons of platform security and stability, patch upgrades are prioritized according to [cluster versioning](./responsibilities.md#change-management). + +## Limited Support Status + +If a cluster transitions to a Limited Support status, Stakater no longer proactively monitors the cluster, the SLA is no longer applicable, and credits requested against the SLA are denied. It does not mean that you no longer have product support. In some cases, the cluster can return to a fully-supported status if you remediate the violating factors. + +SAAP transitions to a Limited Support status for these reasons: + +- If you do not agree to upgrade a cluster to a supported version before the end-of-life date +- If you remove or replace any native SAAP components or any other component that is installed and managed by Stakater + +If you have questions about a specific action that might cause a cluster to transition to a Limited Support status or need further assistance, open a [support ticket](https://support.stakater.com/index.html). + +## Supported Versions Exception Policy + +Stakater reserves the right to add or remove new or existing versions, or delay upcoming minor release versions, that have been identified to have one or more critical production impacting bugs or security issues without advance notice. 
diff --git a/content/for-administrators/create-your-cluster.md b/content/for-administrators/create-your-cluster.md new file mode 100644 index 00000000..e172341c --- /dev/null +++ b/content/for-administrators/create-your-cluster.md @@ -0,0 +1 @@ +# Create your cluster diff --git a/content/for-administrators/provision-your-cluster.md b/content/for-administrators/provision-your-cluster.md deleted file mode 100644 index 2ca58d4c..00000000 --- a/content/for-administrators/provision-your-cluster.md +++ /dev/null @@ -1 +0,0 @@ -# Provision your cluster diff --git a/content/for-administrators/secure-your-cluster/user-access.md b/content/for-administrators/secure-your-cluster/user-access.md index 2e6f0335..6956b1f7 100644 --- a/content/for-administrators/secure-your-cluster/user-access.md +++ b/content/for-administrators/secure-your-cluster/user-access.md @@ -16,7 +16,7 @@ SAAP Cluster is an administrator level role for a user (with restrictive access) - Administrate non-managed Projects/Namespaces - Install/Modify/Delete operators in non-managed Projects/Namespaces -To grant this permission to a user please open a support case with Username/Email of the desired user. +To grant this permission to a user please open a [support ticket](https://support.stakater.com/index.html) with Username/Email of the desired user. ## Tenant level Permissions diff --git a/content/for-cisos/authentication-authorization/google-idp.md b/content/for-cisos/authentication-authorization/google-idp.md index 03380bf0..e5639d16 100644 --- a/content/for-cisos/authentication-authorization/google-idp.md +++ b/content/for-cisos/authentication-authorization/google-idp.md @@ -6,7 +6,7 @@ To enable login with Google you first have to create a project and a client in t ![Developer console](./images/google-developer-console.png) -1. Click the `Create Project` button. Use any value for `Project name` and `Project ID` you want, then click the `Create` button. Wait for the project to be created (this may take a while). Once created you will be brought to the project’s dashboard. +1. Click the `Create Project` button. Use any value for `Project name` and `Project ID` you want, then click the `Create` button. Wait for the project to be created (this may take a while). Once created you will be brought to the project's dashboard. ![Project Dashboard](./images/google-dashboard.png) diff --git a/content/for-cisos/authentication-authorization/saap-authorization-roles.md b/content/for-cisos/authentication-authorization/saap-authorization-roles.md index b4fe8331..882cfd4b 100644 --- a/content/for-cisos/authentication-authorization/saap-authorization-roles.md +++ b/content/for-cisos/authentication-authorization/saap-authorization-roles.md @@ -1,15 +1,15 @@ # Roles in SAAP -Depending on responsibilities of a role, specific roles can be assigned to user groups, which enable them to achieve there daily tasks. Below is a list of roles provided by SAAP for different user groups +Depending on responsibilities of a role, specific roles can be assigned to user groups, which enable them to achieve there daily tasks. Below is a list of roles provided by SAAP for different user groups. 
Namespaces are divided into two sub-categories: - **Stakater owned** : created by the Stakater team which consists of projects/namespaces with format `openshift*`, `stakater*`, `kube*`, `redhat*`, `default` - **Customer owned** : created by the customer -## 1.SAAP Cluster Admin (SCA) +## SAAP Cluster Admin (`sca`) role -SAAP Cluster Admin (SCA): +The permissions for the SAAP Cluster Admin (`sca`) role includes: ### Operators Permissions @@ -22,26 +22,26 @@ SAAP Cluster Admin (SCA): - can install operators in customer owned namespace - can manage subscriptions in customer owned namespace - can not install privileged and custom operators cluster-wide -- can view sealedsecrets custom resource in all namespaces +- can view `sealedsecrets` custom resource in all namespaces ### Projects Permissions - can create/update/patch customer owned namespaces - can create/view/edit/delete all resources in customer owned namespaces - can only view resources in Stakater owned namespaces -- can not view secrets, configmaps ,jobs and cronjobs in Stakater owned namespaces +- can not view `secrets`, `configmaps` , `jobs` and `cronjobs` in Stakater owned namespaces ### Storage -- can create/view/edit persistentvolumeclaims,storageclasses and volumesnapshots in the cluster -- can not delete persistentvolumeclaims,storageclasses and volumesnapshots in the cluster +- can create/view/edit `persistentvolumeclaims`, `storageclasses`, and `volumesnapshots` in the cluster +- can not delete `persistentvolumeclaims`, `storageclasses` and `volumesnapshots` in the cluster ### Networking -- can create/view/delete NetworkPolicy objects in customer owned namespaces +- can create/view/delete `NetworkPolicy` objects in customer owned namespaces - can view services in all namespaces - can view routes and ingresses in all namespaces -- can view/update DNS resources for DNS Forwarder apigroups in customer owned namespaces +- can view/update DNS resources for DNS Forwarder `apigroups` in customer owned namespaces ### Monitoring @@ -59,7 +59,7 @@ SAAP Cluster Admin (SCA): - can view users/groups - can view service accounts/roles/role bindings in customer owned namespaces -- can create/view on UserIdentityMappings +- can create/view on `UserIdentityMappings` - can create/verify tokens and access - can not delete members from cluster-admin - can create `admin` rolebinding on customer owned namespaces @@ -75,11 +75,11 @@ SAAP Cluster Admin (SCA): ### Administration - can create/edit/delete resource quotas and limits on the cluster -- can access the reserved `saap-cluster-admin` project on the cluster, which allows for the creation of ServiceAccounts with elevated privileges and gives the ability to update default limits and quotas for projects on the cluster +- can access the reserved `saap-cluster-admin` project on the cluster, which allows for the creation of `ServiceAccounts` with elevated privileges and gives the ability to update default limits and quotas for projects on the cluster - `saap-cluster-admin` service account can create project - `saap-cluster-admin` service account can delete project - `saap-cluster-admin` service account cannot edit/create rolebinding -- can not create/edit/delete clusterresourcequotas +- can not create/edit/delete `clusterresourcequotas` Only the mentioned permissions above are present for the role, for any other permission required the user need to raise a case with Stakater Support team. 
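As a hedged illustration of the `admin` rolebinding permission listed above, a SAAP Cluster Admin could grant a developer the built-in `admin` role in a customer owned namespace with standard `oc` commands; the user and namespace names are hypothetical:

```bash
# Grant the built-in "admin" role to a hypothetical user in a customer owned namespace
$ oc adm policy add-role-to-user admin jane.doe -n my-team-dev

# Verify the resulting rolebinding
$ oc get rolebindings -n my-team-dev
```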
diff --git a/content/for-delivery-engineers/gitops/application-onboarding.md b/content/for-delivery-engineers/gitops/application-onboarding.md index dad8ed65..017d2348 100644 --- a/content/for-delivery-engineers/gitops/application-onboarding.md +++ b/content/for-delivery-engineers/gitops/application-onboarding.md @@ -74,7 +74,7 @@ EXPOSE 4200 CMD ["node", "server.js"] ``` -> Create [multi-stage builds](https://docs.docker.com/build/building/multi-stage/), use multiple `FROM` statements. Each `FROM` instruction can use a different base, and each of them begins a new stage of the build. You can selectively copy artifacts from one stage to another, leaving behind everything you don’t want in the final image. The end result is the same tiny production image as before, with a significant reduction in complexity. You don’t need to create any intermediate images, and you don’t need to extract any artifacts to your local system at all. +> Create [multi-stage builds](https://docs.docker.com/build/building/multi-stage/), use multiple `FROM` statements. Each `FROM` instruction can use a different base, and each of them begins a new stage of the build. You can selectively copy artifacts from one stage to another, leaving behind everything you don't want in the final image. The end result is the same tiny production image as before, with a significant reduction in complexity. You don't need to create any intermediate images, and you don't need to extract any artifacts to your local system at all. Look into the following dockerizing guides for a start. diff --git a/content/for-delivery-engineers/gitops/faq.md b/content/for-delivery-engineers/gitops/faq.md index 935a8e0b..73aa11d2 100644 --- a/content/for-delivery-engineers/gitops/faq.md +++ b/content/for-delivery-engineers/gitops/faq.md @@ -10,7 +10,7 @@ _GitOps builds on top of Infrastructure as Code, providing application level con ## 3. Can I use a CI server to orchestrate convergence in the cluster? -_You could apply updates to the cluster from the CI server, but it won’t continuously deploy the changes to the cluster, which means that drift won’t be detected and corrected._ +_You could apply updates to the cluster from the CI server, but it won't continuously deploy the changes to the cluster, which means that drift won't be detected and corrected._ ## 4. Should I abandon my CI tool? diff --git a/content/for-developers/explanation/inner-loop.md b/content/for-developers/explanation/inner-loop.md index 274cb80e..effce5a9 100644 --- a/content/for-developers/explanation/inner-loop.md +++ b/content/for-developers/explanation/inner-loop.md @@ -19,6 +19,6 @@ Changes to the inner dev loop process, i.e., containerization, threaten to slow - pushing the container to the registry - deploying containers in Kubernetes -Each new step within the container inner dev loop adds to overall development time, and developers are repeating this process frequently. If the build time is incremented to 5 minutes — not atypical with a standard container build, registry upload, and deploy — then the number of possible development iterations per day drops to ~40. At the extreme that’s a 40% decrease in potential new features being released. This new container build step is a hidden tax, which is quite expensive. +Each new step within the container inner dev loop adds to overall development time, and developers are repeating this process frequently. 
If the build time is incremented to 5 minutes — not atypical with a standard container build, registry upload, and deploy — then the number of possible development iterations per day drops to ~40. At the extreme that's a 40% decrease in potential new features being released. This new container build step is a hidden tax, which is quite expensive. ![After Microservices](./images/local-development-4.png) diff --git a/content/for-developers/tutorials/00-prepare-environment/step-by-step-guide.md b/content/for-developers/tutorials/00-prepare-environment/step-by-step-guide.md index d403af14..8ad0849c 100644 --- a/content/for-developers/tutorials/00-prepare-environment/step-by-step-guide.md +++ b/content/for-developers/tutorials/00-prepare-environment/step-by-step-guide.md @@ -54,9 +54,9 @@ In this guide we will deploy an application with tilt and namespace in remote Op HOST=image-registry-openshift-image-registry.apps.[CLUSTER-NAME].[CLUSTER-ID].kubeapp.cloud ``` - NOTE: Ask SCA (SAAP Cluster Admin) or cluster-admin to provide you the OpenShift internal registry route + NOTE: Ask `sca` (SAAP Cluster Admin) or `cluster-admin` to provide you the OpenShift internal registry route - Then login into docker registry with following command + Then login into docker registry with following command: ```bash docker login -u $(oc whoami) -p $(oc whoami -t) $HOST diff --git a/content/for-developers/tutorials/02-containerize-app/containerize-app.md b/content/for-developers/tutorials/02-containerize-app/containerize-app.md index 0c4a74e1..22c69948 100644 --- a/content/for-developers/tutorials/02-containerize-app/containerize-app.md +++ b/content/for-developers/tutorials/02-containerize-app/containerize-app.md @@ -31,7 +31,7 @@ Lets create a Dockerfile inside the repository folder and delete any existing fi RUN mvn -f /usr/src/app/pom.xml clean package ``` - 3. We will use another FROM statement to create a multi-stage build for reducing the overall image size. More info [here](https://docs.docker.com/build/building/multi-stage/). With multi-stage builds, you use multiple FROM statements in your Dockerfile. Each FROM instruction can use a different base, and each of them begins a new stage of the build. You can selectively copy artifacts from one stage to another, leaving behind everything you don’t want in the final image. + 3. We will use another FROM statement to create a multi-stage build for reducing the overall image size. More info [here](https://docs.docker.com/build/building/multi-stage/). With multi-stage builds, you use multiple FROM statements in your Dockerfile. Each FROM instruction can use a different base, and each of them begins a new stage of the build. You can selectively copy artifacts from one stage to another, leaving behind everything you don't want in the final image. ```Dockerfile FROM registry.access.redhat.com/ubi8/openjdk-11:1.14-10 diff --git a/content/for-developers/tutorials/11-expose-metrics/expose-metrics.md b/content/for-developers/tutorials/11-expose-metrics/expose-metrics.md index 683f6b89..35aea83e 100644 --- a/content/for-developers/tutorials/11-expose-metrics/expose-metrics.md +++ b/content/for-developers/tutorials/11-expose-metrics/expose-metrics.md @@ -46,7 +46,7 @@ To learn more about exposing metrics in spring boot you can refer to this [guide ## Adding Custom Metric to application -A lot of the time you’ll be satisfied by the basic metrics you get out of the box with Micrometer. But you might want to add your own custom metrics. 
+A lot of the time you'll be satisfied by the basic metrics you get out of the box with Micrometer. But you might want to add your own custom metrics. To add a custom metric to our application, we will again be using micrometer. Micrometer can publish different types of metrics, called primitives. These include gauge, counter and timer. diff --git a/content/help/faqs/operations.md b/content/help/faqs/operations.md index 86e35286..9b8a4fd8 100644 --- a/content/help/faqs/operations.md +++ b/content/help/faqs/operations.md @@ -28,13 +28,13 @@ Node access is forbidden. ## How to add new worker nodes to the cluster? -You need to open a support case; until the feature is added to portal +You need to open a [support ticket](https://support.stakater.com/index.html); until the feature is added to portal ## How do I make configuration changes to my cluster? An administrative user has the ability to add/remove users and projects, manage project quotas, view cluster usage statistics, and change the default project template. Admins can also scale a cluster up or down, and even delete an existing cluster. -You need to open a support case; until we allow customers to have cluster admins. +You need to open a [support ticket](https://support.stakater.com/index.html); until we allow customers to have cluster admins. ## Can logs of underlying VMs be streamed out to a customer log analysis system? diff --git a/content/help/faqs/purchasing.md b/content/help/faqs/purchasing.md index 7938f965..a30487a4 100644 --- a/content/help/faqs/purchasing.md +++ b/content/help/faqs/purchasing.md @@ -22,4 +22,4 @@ Customers will be directly billed by Stakater only. ## Do I need to sign a separate contract with Red Hat to use the service? -No, you don’t need to sign a contract with Red Hat. Customers will be billed by Stakater only. +No, you don't need to sign a contract with Red Hat. Customers will be billed by Stakater only. diff --git a/content/help/k8s-concepts/cloud-native-app.md b/content/help/k8s-concepts/cloud-native-app.md index 174fbcac..b5dd22b5 100644 --- a/content/help/k8s-concepts/cloud-native-app.md +++ b/content/help/k8s-concepts/cloud-native-app.md @@ -67,25 +67,25 @@ Cloud-native applications must always consist of a single codebase that is track The single codebase for an application is used to produce any number of immutable releases that are destined for different environments. Following this particular discipline forces teams to analyze the seams of their application and potentially identify monoliths that should be split off into microservices. If you have multiple codebases, then you have a system that needs to be decomposed, not a single application. -The simplest example of violating this guideline is where your application is actually made of up a dozen or more source code repositories. This makes it nearly impossible to automate the build and deploy phases of your application’s life cycle. +The simplest example of violating this guideline is where your application is actually made of up a dozen or more source code repositories. This makes it nearly impossible to automate the build and deploy phases of your application's life cycle. Another way this rule is often broken is when there is a main application and a tightly coupled worker (or an en-queuer and de-queuer, etc.) that collaborate on the same units of work. In scenarios like this, there are actually multiple codebases supporting a single application, even if they share the same source repository root. 
This is why I think it is important to note that the concept of a codebase needs to imply a more cohesive unit than just a repository in your version control system. Conversely, this rule can be broken when one codebase is used to produce multiple applications. For example, a single codebase with multiple launch scripts or even multiple points of execution within a single wrapper module. In the Java world, EAR files are a gateway drug to violating the one codebase rule. In the interpreted language world (e.g., Ruby), you might have multiple launch scripts within the same codebase, each performing an entirely different task. -Multiple applications within a single codebase are often a sign that multiple teams are maintaining a single codebase, which can get ugly for a number of reasons. Conway’s law states that the organization of a team will eventually be reflected in the architecture of the product that team builds. In other words, dysfunction, poor organization, and lack of discipline among teams usually results in the same dysfunction or lack of discipline in the code. +Multiple applications within a single codebase are often a sign that multiple teams are maintaining a single codebase, which can get ugly for a number of reasons. Conway's law states that the organization of a team will eventually be reflected in the architecture of the product that team builds. In other words, dysfunction, poor organization, and lack of discipline among teams usually results in the same dysfunction or lack of discipline in the code. -In situations where you have multiple teams and a single codebase, you may want to take advantage of Conway’s law and dedicate smaller teams to individual applications or microservices. +In situations where you have multiple teams and a single codebase, you may want to take advantage of Conway's law and dedicate smaller teams to individual applications or microservices. When looking at your application and deciding on opportunities to reorganize the codebase and teams onto smaller products, you may find that one or more of the multiple codebases contributing to your application could be split out and converted into a microservice or API that can be reused by multiple applications. -In other words, one codebase, one application does not mean you’re not allowed to share code across multiple applications; it just means that the shared code is yet another codebase. +In other words, one codebase, one application does not mean you're not allowed to share code across multiple applications; it just means that the shared code is yet another codebase. -This also doesn’t mean that all shared code needs to be a microservice. Rather, you should evaluate whether the shared code should be considered a separately released product that can then be vendored into your application as a dependency. +This also doesn't mean that all shared code needs to be a microservice. Rather, you should evaluate whether the shared code should be considered a separately released product that can then be vendored into your application as a dependency. **Why?** -Makes it nearly impossible to automate the build and deploy phases of your application’s life cycle. +Makes it nearly impossible to automate the build and deploy phases of your application's life cycle. **How?** @@ -103,9 +103,9 @@ A cloud-native application never relies on implicit existence of system-wide pac Not properly isolating dependencies can cause untold problems. 
In some of the most common dependency-related problems, you could have a developer working on version X of some dependent library on his workstation, but version X+1 of that library has been installed in a central location in production. This can cause everything from runtime failures all the way up to insidious and difficult to diagnose subtle failures. If left untreated, these types of failures can bring down an entire server or cost a company millions through undiagnosed data corruption. -Properly managing your application’s dependencies is all about the concept of repeatable deployments. Nothing about the runtime into which an application is deployed should be assumed that isn’t automated. In an ideal world, the application’s container is bundled (or bootstrapped, as some frameworks called it) inside the app’s release artifact—or better yet, the application has no container at all. +Properly managing your application's dependencies is all about the concept of repeatable deployments. Nothing about the runtime into which an application is deployed should be assumed that isn't automated. In an ideal world, the application's container is bundled (or bootstrapped, as some frameworks called it) inside the app's release artifact—or better yet, the application has no container at all. -However, for some enterprises, it just isn’t practical (or possible, even) to embed a server or container in the release artifact, so it has to be combined with the release artifact, which, in many cloud environments like Heroku or Cloud Foundry, is handled by something called a buildpack. +However, for some enterprises, it just isn't practical (or possible, even) to embed a server or container in the release artifact, so it has to be combined with the release artifact, which, in many cloud environments like Heroku or Cloud Foundry, is handled by something called a buildpack. Applying discipline to dependency management will bring your applications one step closer to being able to thrive in cloud environments. @@ -119,15 +119,15 @@ Many of these tools also have the ability to isolate dependencies. This is done **What?** -Recognize your API as a first-class artifact of the development process, API first gives teams the ability to work against each other’s public contracts without interfering with internal development processes. +Recognize your API as a first-class artifact of the development process, API first gives teams the ability to work against each other's public contracts without interfering with internal development processes. **Why?** -Even if you’re not planning on building a service as part of a larger ecosystem, the discipline of starting all of your development at the API level still pays enough dividends to make it worth your time. +Even if you're not planning on building a service as part of a larger ecosystem, the discipline of starting all of your development at the API level still pays enough dividends to make it worth your time. Built into every decision you make and every line of code you write is the notion that every functional requirement of your application will be met through the consumption of an API. Even a user interface, be it web or mobile, is really nothing more than a consumer of an API. -By designing your API first, you are able to facilitate discussion with your stakeholders (your internal team, customers, or possibly other teams within your organization who want to consume your API) well before you might have coded yourself past the point of no return. 
This collaboration then allows you to build user stories, mock your API, and generate documentation that can be used to further socialize the intent and functionality of the service you’re building. +By designing your API first, you are able to facilitate discussion with your stakeholders (your internal team, customers, or possibly other teams within your organization who want to consume your API) well before you might have coded yourself past the point of no return. This collaboration then allows you to build user stories, mock your API, and generate documentation that can be used to further socialize the intent and functionality of the service you're building. There is absolutely no excuse for claiming that API first is a difficult or unsupported path. This is a pattern that can be applied to non-cloud software development, but it is particularly well suited to cloud development in its ability to allow rapid prototyping, support a services ecosystem, and facilitate the automated deployment testing and continuous delivery pipelines that are some of the hallmarks of modern cloud-native application development. @@ -149,7 +149,7 @@ Stakater App Agility Platform offers a fully managed 3Scale API Gateway add-on t In the world of waterfall application development, we spend an inordinate amount of time designing an application before a single line of code is written. This type of software development life cycle is not well suited to business situations with high uncertainty and high expectations of fast delivery. Agile works better then. Waterfall works better in business situations with high regulation, low uncertainty, clear expectations, and clear timelines. -However, this doesn’t mean that we don’t design at all in Agile. Instead, it means we design small features that get released, and we have a high-level design that is used to inform everything we do; but we also know that designs change, and small amounts of design are part of every iteration rather than being done entirely up front. +However, this doesn't mean that we don't design at all in Agile. Instead, it means we design small features that get released, and we have a high-level design that is used to inform everything we do; but we also know that designs change, and small amounts of design are part of every iteration rather than being done entirely up front. The application developer best understands the application dependencies, and it is during the design phase that arrangements are made to declare dependencies as well as the means by which those dependencies are vendored, or bundled, with the application. In other words, the developer decides what libraries the application is going to use, and how those libraries are eventually going to be bundled into an immutable release. @@ -159,7 +159,7 @@ The build stage is where a code repository is converted into a versioned, binary Builds are ideally created by a Continuous Integration server, and there is a `1:many` relationship between builds and deployments. A single build should be able to be released or deployed to any number of environments, and each of those unmodified builds should work as expected. The immutability of this artifact and adherence to the other factors (especially environment parity) give you confidence that your app will work in production if it worked in QA. -If you ever find yourself troubleshooting "works on my machine" problems, that is a clear sign that the four stages of this process are likely not as separate as they should be. 
Forcing your team to use a CI server may often seem like a lot of upfront work, but once running, you’ll see that the “one build, many deploys” pattern works. +If you ever find yourself troubleshooting "works on my machine" problems, that is a clear sign that the four stages of this process are likely not as separate as they should be. Forcing your team to use a CI server may often seem like a lot of upfront work, but once running, you'll see that the “one build, many deploys” pattern works. Once you have confidence that your codebase will work anywhere it should, and you no longer fear production releases, you will start to see some of the truly amazing benefits of adopting the cloud-native philosophy, like continuous deployment and releases that happen hours after a commit rather than months. @@ -169,21 +169,21 @@ In the cloud-native world, the release is typically done by pushing to your clou Releases need to be unique, and every release should ideally be tagged with some kind of unique ID, such as a timestamp or an auto-incremented number. Thinking back to the `1:many` relationship between builds and releases, it makes sense that releases should not be tagged with the build ID. -Let’s say that your CI system has just built your application and labeled that artifact build-1234. The CI system might then release that application to the dev, staging, and production environments. The scheme is up to you, but each of those releases should be unique because each one combined the original build with environment specific configuration settings. +Let's say that your CI system has just built your application and labeled that artifact build-1234. The CI system might then release that application to the dev, staging, and production environments. The scheme is up to you, but each of those releases should be unique because each one combined the original build with environment specific configuration settings. If something goes wrong, you want the ability to audit what you have released to a given environment and, if necessary, to roll back to the previous release. This is another key reason for keeping releases both immutable and uniquely identified. -There are a million different types of problems that arise from an organization’s inability to reproduce a release as it appeared at one point in the past. By having separate build and release phases, and storing those artifacts, rollback and historical auditing is possible. +There are a million different types of problems that arise from an organization's inability to reproduce a release as it appeared at one point in the past. By having separate build and release phases, and storing those artifacts, rollback and historical auditing is possible. ### Run The run phase is also typically done by the cloud provider (although developers need be able to run applications locally). The details vary among providers, but the general pattern is that your application is placed within some kind of container (Docker, Garden, Warden, etc.), and then a process is started to launch your application. -It’s worth noting that ensuring that a developer can run an application locally on her workstation while still allowing it to be deployed to multiple clouds via CD pipeline is often a difficult problem to solve. It is worth solving, however, because developers need to feel unhindered while working on cloud-native applications. 
+It's worth noting that ensuring that a developer can run an application locally on her workstation while still allowing it to be deployed to multiple clouds via CD pipeline is often a difficult problem to solve. It is worth solving, however, because developers need to feel unhindered while working on cloud-native applications. When an application is running, the cloud runtime is then responsible for keeping it alive, monitoring its health, and aggregating its logs, as well as a mountain of other administrative tasks like dynamic scaling and fault tolerance. -Ultimately, the goal of this guidance is to maximize your delivery speed while keeping high confidence through automated testing and deployment. We get some agility and speed benefits out of the box when working on the cloud; but if we follow the guidelines in this chapter, we can squeeze every ounce of speed and agility out of our product release pipeline without sacrificing our confidence in our application’s ability to do its job. +Ultimately, the goal of this guidance is to maximize your delivery speed while keeping high confidence through automated testing and deployment. We get some agility and speed benefits out of the box when working on the cloud; but if we follow the guidelines in this chapter, we can squeeze every ounce of speed and agility out of our product release pipeline without sacrificing our confidence in our application's ability to do its job. **Why?** @@ -218,17 +218,17 @@ In order to be able to keep configuration separate from code and credentials, we - Credentials to third-party services such as Amazon AWS or APIs like Google Maps, Twitter, and Facebook - Information that might normally be bundled in properties files or configuration XML, or YML -Configuration does not include internal information that is part of the application itself. Again, if the value remains the same across all deployments (it is intentionally part of your immutable build artifact), then it isn’t configuration. +Configuration does not include internal information that is part of the application itself. Again, if the value remains the same across all deployments (it is intentionally part of your immutable build artifact), then it isn't configuration. **Why?** -Credentials are extremely sensitive information and have absolutely no business in a codebase. Oftentimes, developers will extract credentials from the compiled source code and put them in properties files or XML configuration, but this hasn’t actually solved the problem. Bundled resources, including XML and properties files, are still part of the codebase. This means credentials bundled in resource files that ship with your application are still violating this rule. +Credentials are extremely sensitive information and have absolutely no business in a codebase. Oftentimes, developers will extract credentials from the compiled source code and put them in properties files or XML configuration, but this hasn't actually solved the problem. Bundled resources, including XML and properties files, are still part of the codebase. This means credentials bundled in resource files that ship with your application are still violating this rule. -If the general public were to have access to your code, have you exposed sensitive information about the resources or services on which your application relies? Can people see internal URLs, credentials to backing services, or other information that is either sensitive or irrelevant to people who don’t work in your target environments? 
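To make this concrete, the sketch below shows one common way to keep such values out of the build artifact on Kubernetes: configuration and credentials are injected through the environment at deploy time, so the same immutable image can be promoted unchanged from environment to environment. The resource names (`payments-api`, `payments-config`, `payments-credentials`) and keys are hypothetical and purely illustrative, not a prescribed convention.

```yaml
# Minimal sketch: the image stays identical across environments;
# only the referenced ConfigMap and Secret differ per deployment target.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: payments-api
spec:
  replicas: 2
  selector:
    matchLabels:
      app: payments-api
  template:
    metadata:
      labels:
        app: payments-api
    spec:
      containers:
        - name: payments-api
          image: registry.example.com/payments-api:1.4.2
          env:
            - name: DATABASE_URL            # environment-specific configuration
              valueFrom:
                configMapKeyRef:
                  name: payments-config
                  key: database-url
            - name: DATABASE_PASSWORD       # credential, never baked into the image or the repo
              valueFrom:
                secretKeyRef:
                  name: payments-credentials
                  key: database-password
```

Rotating a credential or pointing the application at a different backing service then becomes a change to the Secret or ConfigMap in that environment, not a rebuild or re-release of the code.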
+If the general public were to have access to your code, have you exposed sensitive information about the resources or services on which your application relies? Can people see internal URLs, credentials to backing services, or other information that is either sensitive or irrelevant to people who don't work in your target environments? -If you can open source your codebase without exposing sensitive or environment-specific information, then you’ve probably done a good job isolating your code, configuration, and credentials. +If you can open source your codebase without exposing sensitive or environment-specific information, then you've probably done a good job isolating your code, configuration, and credentials. -It should be immediately obvious why we don’t want to expose credentials, but the need for external configuration is often not as obvious. External configuration supports our ability to deploy immutable builds to multiple environments automatically via CD pipelines and helps us maintain development/production environment parity. +It should be immediately obvious why we don't want to expose credentials, but the need for external configuration is often not as obvious. External configuration supports our ability to deploy immutable builds to multiple environments automatically via CD pipelines and helps us maintain development/production environment parity. **How?** @@ -256,7 +256,7 @@ By combining liveness and readiness probes, you can instruct Kubernetes to autom **Why?** -Readiness probes allow your application to report when it should start receiving traffic. This is always what marks a pod ‘Ready’ in the cluster. +Readiness probes allow your application to report when it should start receiving traffic. This is always what marks a pod ‘Ready' in the cluster. Health checks (often custom HTTP endpoints) help orchestrators, like Kubernetes, perform automated actions to maintain overall system health. These can be a simple HTTP route that returns meaningful values, or a command that can be executed from within the container. @@ -286,7 +286,7 @@ When your applications are decoupled from the knowledge of log storage, processi One of the many reasons your application should not be controlling the ultimate destiny of its logs is due to elastic scalability. When you have a fixed number of instances on a fixed number of servers, storing logs on disk seems to make sense. However, when your application can dynamically go from 1 running instance to 100, and you have no idea where those instances are running, you need your cloud provider to deal with aggregating those logs on your behalf. -Simplifying your application’s log emission process allows you to reduce your codebase and focus more on your application’s core business value. +Simplifying your application's log emission process allows you to reduce your codebase and focus more on your application's core business value. **How?** @@ -308,7 +308,7 @@ A backing service is any service on which your application relies for its functi **Why?** -When building applications designed to run in a cloud environment where the filesystem must be considered ephemeral, you also need to treat file storage or disk as a backing service. You shouldn’t be reading to or writing from files on disk like you might with regular enterprise applications. Instead, file storage should be a backing service that is bound to your application as a resource. 
+When building applications designed to run in a cloud environment where the filesystem must be considered ephemeral, you also need to treat file storage or disk as a backing service. You shouldn't be reading to or writing from files on disk like you might with regular enterprise applications. Instead, file storage should be a backing service that is bound to your application as a resource. A bound resource is really just a means of connecting your application to a backing service. A resource binding for a database might include a username, a password, and a URL that allows your application to consume that resource. @@ -324,7 +324,7 @@ This means that there is never a line of code in your application that tightly c Finally, one of the biggest advantages to treating backing services as bound resources is that when you develop an application with this in mind, it becomes possible to attach and detach bound resources at will. -Let’s say one of the databases on which your application relies is not responding. This causes a cascading failure effect and endangers your application. A classic enterprise application would be helpless and at the mercy of the flailing database. +Let's say one of the databases on which your application relies is not responding. This causes a cascading failure effect and endangers your application. A classic enterprise application would be helpless and at the mercy of the flailing database. ### Circuit Breakers @@ -365,9 +365,9 @@ Finally, health and system logs are something that should be provided by your cl The cloud makes many things easy, but monitoring and telemetry are still difficult, probably even more difficult than traditional, enterprise application monitoring. When you are facing a stream that contains regular health checks, request audits, business-level events, and tracking data, and performance metrics, that is an incredible amount of data. -When planning your monitoring strategy, you need to take into account how much information you’ll be aggregating, the rate at which it comes in, and how much of it you’re going to store. If your application dynamically scales from 1 instance to 100, that can also result in a hundredfold increase in your log traffic. +When planning your monitoring strategy, you need to take into account how much information you'll be aggregating, the rate at which it comes in, and how much of it you're going to store. If your application dynamically scales from 1 instance to 100, that can also result in a hundredfold increase in your log traffic. -Auditing and monitoring cloud applications are often overlooked but are perhaps some of the most important things to plan and do properly for production deployments. If you wouldn’t blindly launch a satellite into orbit with no way to monitor it, you shouldn’t do the same to your cloud application. +Auditing and monitoring cloud applications are often overlooked but are perhaps some of the most important things to plan and do properly for production deployments. If you wouldn't blindly launch a satellite into orbit with no way to monitor it, you shouldn't do the same to your cloud application. Getting telemetry done right can mean the difference between success and failure in the cloud. @@ -488,7 +488,7 @@ One question that we field on a regular basis stems from confusion around the co A stateless application makes no assumptions about the contents of memory prior to handling a request, nor does it make assumptions about memory contents after handling that request. 
The application can create and consume transient state in the middle of handling a request or processing a transaction, but that data should all be gone by the time the client has been given a response. -To put it as simply as possible, all long-lasting state must be external to the application, provided by backing services. So the concept isn’t that state cannot exist; it is that it cannot be maintained within your application. +To put it as simply as possible, all long-lasting state must be external to the application, provided by backing services. So the concept isn't that state cannot exist; it is that it cannot be maintained within your application. As an example, a microservice that exposes functionality for user management must be stateless, so the list of all users is maintained in a backing service (an Oracle or MongoDB database, for instance). For obvious reasons, it would make no sense for a database to be stateless. @@ -496,7 +496,7 @@ As an example, a microservice that exposes functionality for user management mus Processes often communicate with each other by sharing common resources. Even without considering the move to the cloud, there are a number of benefits to be gained from adopting the Share-Nothing pattern. Firstly, anything shared among processes is a liability that makes all of those processes more brittle. In many high-availability patterns, processes will share data through a wide variety of techniques to elect cluster leaders, to decide on whether a process is a primary or backup, and so on. -All of these options need to be avoided when running in the cloud. Your processes can vanish at a moment’s notice with no warning, and that’s a good thing. Processes come and go, scale horizontally and vertically, and are highly disposable. This means that anything shared among processes could also vanish, potentially causing a cascading failure. +All of these options need to be avoided when running in the cloud. Your processes can vanish at a moment's notice with no warning, and that's a good thing. Processes come and go, scale horizontally and vertically, and are highly disposable. This means that anything shared among processes could also vanish, potentially causing a cascading failure. It should go without saying, but the filesystem is not a backing service. This means that you cannot consider files a means by which applications can share data. Disks in the cloud are ephemeral and, in some cases, even read-only. @@ -566,7 +566,7 @@ Maintaining environment parity has become easier in the last few years because d The Environment Parity principle means all deployment paths are similar yet independent and that no deployment "leapfrogs" into another deployment target. -Backing services, such as the app’s database, queueing system, or cache, is one area where dev/prod parity is important. Many languages offer libraries which simplify access to the backing service, including adapters to different types of services. +Backing services, such as the app's database, queueing system, or cache, is one area where dev/prod parity is important. Many languages offer libraries which simplify access to the backing service, including adapters to different types of services. Developers sometimes find great appeal in using a lightweight backing service in their local environments, while a more serious and robust backing service will be used in production. 
For example, using SQLite locally and PostgreSQL in production; or local process memory for caching in development and Memcached in production. @@ -588,15 +588,15 @@ From an app's point of view, APIs provide access to the apps in your enterprise **Why?** -A cloud-native application is a secure application. Your code, whether compiled or raw, is transported across many data centers, executed within multiple containers, and accessed by countless clients some legitimate, most nefarious. Even if the only reason you implement security in your application is so you have an audit trail of which user made which data change, that alone is benefit enough to justify the relatively small amount of time and effort it takes to secure your application’s endpoints. +A cloud-native application is a secure application. Your code, whether compiled or raw, is transported across many data centers, executed within multiple containers, and accessed by countless clients some legitimate, most nefarious. Even if the only reason you implement security in your application is so you have an audit trail of which user made which data change, that alone is benefit enough to justify the relatively small amount of time and effort it takes to secure your application's endpoints. -In an ideal world, all cloud-native applications would secure all of their endpoints with RBAC (role-based access control). Every request for an application’s resources should know who is making the request, and the roles to which that consumer belongs. These roles dictate whether the calling client has sufficient permission for the application to honor the request. +In an ideal world, all cloud-native applications would secure all of their endpoints with RBAC (role-based access control). Every request for an application's resources should know who is making the request, and the roles to which that consumer belongs. These roles dictate whether the calling client has sufficient permission for the application to honor the request. **How?** Considerations for helping to protect access to your app include the following: -- With tools like OAuth2, OpenID Connect, various SSO servers and standards, as well as a near infinite supply of language-specific authentication and authorization libraries, security should be something that is baked into the application’s development from day one, and not added as a bolt-on project after an application is running in production. +- With tools like OAuth2, OpenID Connect, various SSO servers and standards, as well as a near infinite supply of language-specific authentication and authorization libraries, security should be something that is baked into the application's development from day one, and not added as a bolt-on project after an application is running in production. - Transport Layer Security (TLS). Use TLS to help protect data in transit. You might want to use mutual TLS for your business apps; this is made easier if you use service meshes like Istio on Kubernetes. It's also common for some use cases to create allow lists and deny lists based on IP addresses as an additional layer of security. Transport security also involves protecting your services against DDoS and bot attacks. - App and end-user security. Transport security helps provide security for data in transit and establishes trust. But it's a best practice to add app-level security to control access to your app based on who the consumer of the app is. The consumers can be other apps, employees, partners, or your enterprise's end customers. 
You can enforce security using API keys (for consuming apps), certification-based authentication and authorization, JSON Web Tokens (JWTs) exchange, or Security Assertion Markup Language (SAML). @@ -839,7 +839,7 @@ If you need data persistence for your application, work with your platform team **Why?** -Your application’s container filesystem is considered ephemeral. Meaning it will not move with the workload. This ephemeral storage is typically resource constrained and should not be used for anything more than small write needs, where loss of data is not a concern. +Your application's container filesystem is considered ephemeral. Meaning it will not move with the workload. This ephemeral storage is typically resource constrained and should not be used for anything more than small write needs, where loss of data is not a concern. **How?** diff --git a/content/help/k8s-concepts/helm.md b/content/help/k8s-concepts/helm.md index 0cd1dce0..50ff8836 100644 --- a/content/help/k8s-concepts/helm.md +++ b/content/help/k8s-concepts/helm.md @@ -1,6 +1,6 @@ # Helm -Interacting directly with Kubernetes involves either manual configuration using the kubectl command line utility, or passing various flavors of YAML data to the API. This can be complex and is open to human error creeping in. In keeping with the DevOps principle of ‘configuration as code’, we leverage Helm to create atomic blocks of configuration for your applications. +Interacting directly with Kubernetes involves either manual configuration using the kubectl command line utility, or passing various flavors of YAML data to the API. This can be complex and is open to human error creeping in. In keeping with the DevOps principle of ‘configuration as code', we leverage Helm to create atomic blocks of configuration for your applications. Helm simplifies Kubernetes configuration through the concept of a Chart, which is a set of files that together specify the meta-data necessary to deploy a given application or service into Kubernetes. Rather than maintain a series of boilerplate YAML files based upon the Kubernetes API, Helm uses a templating language to create the required YAML specifications from a single shared set of values. This makes it possible to specify re-usable Kubernetes applications where configuration can be selectively over-ridden at deployment time. @@ -75,11 +75,11 @@ As mentioned before a Helm chart version is completely different than the applic ### 1. Simple 1-1 versioning -This is the most basic versioning approach and it is the suggested one if you are starting out with Helm. Don’t use the `appVersion` field at all (it is optional anyway) and just keep the chart version in sync with your actual application. +This is the most basic versioning approach and it is the suggested one if you are starting out with Helm. Don't use the `appVersion` field at all (it is optional anyway) and just keep the chart version in sync with your actual application. This approach makes version bumping very easy (you bump everything up) and also allows you to quickly track what application version is deployed on your cluster (same as chart version). -The downside of this approach is that you can’t track chart changes separately. +The downside of this approach is that you can't track chart changes separately. 
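For illustration, a `Chart.yaml` following this scheme might look like the sketch below (the chart name and version are hypothetical); the chart `version` is simply bumped in lockstep with every application release, and `appVersion` is left out:

```yaml
# Chart.yaml - simple 1-1 versioning (illustrative values only)
apiVersion: v2
name: example-app
description: Deploys example-app; the chart version tracks the application release
type: application
version: 2.7.0   # same number as the application release; appVersion is intentionally omitted
```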
![Chart Version Single](./images/chart-version-single.jpeg) diff --git a/content/legal-documents/sla.md b/content/legal-documents/sla.md index 2a88e20f..8da52304 100644 --- a/content/legal-documents/sla.md +++ b/content/legal-documents/sla.md @@ -6,7 +6,7 @@ This SERVICE LEVEL AGREEMENT ("**SLA**") is by and between **Stakater** and you ## 1. Term -- 1.1 This SLA is effective from the time the Customer uses the Service and will automatically renew every Service Period, unless a Party gives an at least thirty (30) days’ notice of termination in writing to the other Party. +- 1.1 This SLA is effective from the time the Customer uses the Service and will automatically renew every Service Period, unless a Party gives an at least thirty (30) days' notice of termination in writing to the other Party. ## 2. The Services @@ -32,7 +32,7 @@ This SERVICE LEVEL AGREEMENT ("**SLA**") is by and between **Stakater** and you - 5.1 "**Downtime Period**" means a period of at least ten (10) consecutive minutes of Downtime. Intermittent Downtime for a period of less than ten (10) minutes or less will not be counted towards any Downtime Periods. -- 5.2 Subject to the Exclusions detailed in [Section 13](#13-sla-exclusions), the Service will be considered unavailable (and an "**SLA Event**" will be deemed as having taken place) if Stakater’s monitoring detects that the Service or its component has failed for ten (10) consecutive minutes. +- 5.2 Subject to the Exclusions detailed in [Section 13](#13-sla-exclusions), the Service will be considered unavailable (and an "**SLA Event**" will be deemed as having taken place) if Stakater's monitoring detects that the Service or its component has failed for ten (10) consecutive minutes. - 5.3 Downtime is calculated from the time that Stakater confirms the SLA event has occurred, until the time that Stakater resolves the issue and the Service becomes available to the Customer. If two or more SLA events occur simultaneously, the SLA event with the longest duration will be used to determine the total number of minutes for which the service was unavailable. @@ -58,13 +58,13 @@ This SERVICE LEVEL AGREEMENT ("**SLA**") is by and between **Stakater** and you - 6.1.9 A denial of service or port attack; - - 6.1.10 Customer’s intentional acts, errors, or omissions; + - 6.1.10 Customer's intentional acts, errors, or omissions; - - 6.1.11 Customer’s use of the Service after Stakater has advised Customer to modify Customer’s use of the Service, if Customer did not modify Customer’s use as advised; + - 6.1.11 Customer's use of the Service after Stakater has advised Customer to modify Customer's use of the Service, if Customer did not modify Customer's use as advised; - 6.1.12 Faulty input, instructions, or arguments (for example, requests to access files that do not exist); - - 6.1.13 Customer’s attempts to perform operations that exceed prescribed quotas or that resulted from Stakater’s throttling of suspected abusive behaviour; + - 6.1.13 Customer's attempts to perform operations that exceed prescribed quotas or that resulted from Stakater's throttling of suspected abusive behaviour; - 6.1.14 Issues that affect only the Customer and related to external apps or third-parties; @@ -76,7 +76,7 @@ This SERVICE LEVEL AGREEMENT ("**SLA**") is by and between **Stakater** and you - 6.1.18 Any fault period during which Service is suspended under provision in this SLA. -- 6.2 Customer is solely responsible for obtaining appropriate hardware and internet access to use the Service. 
The Service shall not be deemed unavailable due to Customer’s inadequate or incompatible hardware and internet access. +- 6.2 Customer is solely responsible for obtaining appropriate hardware and internet access to use the Service. The Service shall not be deemed unavailable due to Customer's inadequate or incompatible hardware and internet access. ## 7. Monthly Uptime Percentage @@ -94,7 +94,7 @@ This SERVICE LEVEL AGREEMENT ("**SLA**") is by and between **Stakater** and you - 8.4 A Financial Credit will be applicable and issued only if the Financial Credit amount for the applicable monthly billing cycle is greater than One US Dollar ($1). -- 8.5 If Stakater determines that a Financial Credit is owed to Customer, Stakater will apply the Financial Credit to Customer’s next invoice. At Stakater’s discretion, Stakater may issue the Financial Credit to the credit card Customer used to pay for the billing cycle in which the Service did not meet the SLO. +- 8.5 If Stakater determines that a Financial Credit is owed to Customer, Stakater will apply the Financial Credit to Customer's next invoice. At Stakater's discretion, Stakater may issue the Financial Credit to the credit card Customer used to pay for the billing cycle in which the Service did not meet the SLO. ## 9. Maximum Financial Credits @@ -112,7 +112,7 @@ This SERVICE LEVEL AGREEMENT ("**SLA**") is by and between **Stakater** and you - 10.3.2 Have materially breached this SLA; or - - 10.3.3 Have invoices totalling more than One Hundred US Dollar ($100) that are more than thirty (30) days past due at either the time of Customer’s request or the time the Financial Credit is to be applied. + - 10.3.3 Have invoices totalling more than One Hundred US Dollar ($100) that are more than thirty (30) days past due at either the time of Customer's request or the time the Financial Credit is to be applied. - 10.4 Financial Credits will not entitle Customer to any refund or other payment from Stakater. @@ -120,7 +120,7 @@ This SERVICE LEVEL AGREEMENT ("**SLA**") is by and between **Stakater** and you - 10.6 Customer may not unilaterally offset the fees due from Customer for any performance or availability issues. -- 10.7 Stakater’s failure to meet the SLO or any failure by Stakater to provide uninterrupted service does not constitute a breach of contract. Unless otherwise provided in the SLA, Customer’s sole and exclusive remedy for any unavailability, non-performance, or other failure by Stakater to provide the Service is the receipt of a Financial Credit (if eligible) in accordance with this SLA. +- 10.7 Stakater's failure to meet the SLO or any failure by Stakater to provide uninterrupted service does not constitute a breach of contract. Unless otherwise provided in the SLA, Customer's sole and exclusive remedy for any unavailability, non-performance, or other failure by Stakater to provide the Service is the receipt of a Financial Credit (if eligible) in accordance with this SLA. - 10.8 Under no circumstances will any tests performed by Customer, its vendors or partners be recognized by Stakater as a valid measurable criterion of violation length, quality or type for the purposes of establishing a Financial Credit. @@ -130,13 +130,13 @@ This SERVICE LEVEL AGREEMENT ("**SLA**") is by and between **Stakater** and you - 11.2 Stakater must be able to reproduce errors in Issues to resolve them. 
The Customer will cooperate and work closely with Stakater to reproduce errors, including conducting diagnostic or troubleshooting activities as requested and appropriate. Customer will make its resources available and reasonably cooperate with Stakater to help resolve the Issue. -- 11.3 Stakater is not responsible for comprehensive monitoring of Customer’s data or use of the Service; this responsibility lies with the Customer. Stakater will review the data or circumstances related to the Issue as reported by the Customer. +- 11.3 Stakater is not responsible for comprehensive monitoring of Customer's data or use of the Service; this responsibility lies with the Customer. Stakater will review the data or circumstances related to the Issue as reported by the Customer. ## 12. Service Improvements - 12.1 Stakater will make available to customers new versions, releases, and updates to the Service to solve defects or errors, keep the Service up-to-date with market developments, or otherwise improve the Service. Stakater will only support the most recent version of the Service. -- 12.2 New versions, releases, or updates will contain at least the level of functionality as set out in this SLA and as contained in the version or release of the Service previously used by Customer, and will not otherwise negatively impact Customer’s use of the Service. Stakater shall make reasonable efforts to ensure that when performing such actions, the impact on Customer and its customer(s) is limited. +- 12.2 New versions, releases, or updates will contain at least the level of functionality as set out in this SLA and as contained in the version or release of the Service previously used by Customer, and will not otherwise negatively impact Customer's use of the Service. Stakater shall make reasonable efforts to ensure that when performing such actions, the impact on Customer and its customer(s) is limited. ## 13. 
SLA Exclusions

@@ -162,11 +162,11 @@ The SLA does not apply to any:

    - 13.5.5 that resulted from cluster nodes running out of capacity;

-    - 13.5.6 that resulted from Customer unauthorised action or lack of action when required, or from Customer employees, agents, contractors, or vendors, or anyone gaining access to Stakater’s solution by means of Customer passwords or equipment, or otherwise resulting from Customer failure to follow appropriate security practices;
+    - 13.5.6 that resulted from Customer unauthorised action or lack of action when required, or from Customer employees, agents, contractors, or vendors, or anyone gaining access to Stakater's solution by means of Customer passwords or equipment, or otherwise resulting from Customer failure to follow appropriate security practices;

    - 13.5.7 that resulted from faulty input, instructions, or arguments (for example, requests to access files that do not exist);

-    - 13.5.8 that resulted from Customer attempts to perform operations that exceed prescribed quotas or allowed permissions or that resulted from Stakater’s throttling of suspected abusive behaviour;
+    - 13.5.8 that resulted from Customer attempts to perform operations that exceed prescribed quotas or allowed permissions or that resulted from Stakater's throttling of suspected abusive behaviour;

    - 13.5.9 that resulted from Customer attempts to perform operations on the account or subscription being managed by Stakater, even though Customer has permissions they should treat them as read only;

diff --git a/content/managed-addons/argocd/overview.md b/content/managed-addons/argocd/overview.md
index 614152ee..b0c44ee8 100644
--- a/content/managed-addons/argocd/overview.md
+++ b/content/managed-addons/argocd/overview.md
@@ -1,6 +1,6 @@
 # ArgoCD

-We use GitOps to continuously deliver application changes
+SAAP uses GitOps to continuously deliver application changes.

 [Argo CD](https://argoproj.github.io/argo-cd/) is a declarative, GitOps continuous delivery tool for Kubernetes. The deployment environment is a namespace in a container platform.

diff --git a/content/managed-addons/monitoring-stack/app-alerts.md b/content/managed-addons/monitoring-stack/app-alerts.md
index 89dd5f69..84d84078 100644
--- a/content/managed-addons/monitoring-stack/app-alerts.md
+++ b/content/managed-addons/monitoring-stack/app-alerts.md
@@ -26,7 +26,7 @@ data:

 ## Excluding a user-defined project from monitoring

-Individual user-defined projects can be excluded from user workload monitoring. To do so, simply add the `openshift.io/user-monitoring` label to the project’s namespace with a value of false.
+Individual user-defined projects can be excluded from user workload monitoring. To do so, simply add the `openshift.io/user-monitoring` label to the project's namespace with a value of false.

 Add the label to the project namespace:

diff --git a/content/managed-addons/monitoring-stack/stack.md b/content/managed-addons/monitoring-stack/stack.md
index 4bd5db66..9384a2a8 100644
--- a/content/managed-addons/monitoring-stack/stack.md
+++ b/content/managed-addons/monitoring-stack/stack.md
@@ -8,7 +8,7 @@ Metrics collection and storage via Prometheus, an open-source monitoring system

 ## Alert Manager

-Alerting/notifications via Prometheus’ Alertmanager, an open-source tool that handles alerts send by Prometheus.
+Alerting/notifications via Prometheus' Alertmanager, an open-source tool that handles alerts sent by Prometheus.
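For illustration only (the names, labels, and threshold below are hypothetical and not part of the managed configuration), an alert of the kind Alertmanager receives is typically declared as a `PrometheusRule`; Prometheus evaluates the expression and, once it has been true for the configured duration, sends the firing alert to Alertmanager for routing:

```yaml
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: example-app-alerts
  namespace: example-app
spec:
  groups:
    - name: example-app.rules
      rules:
        - alert: ExampleAppHighErrorRate
          # Fire when more than 5% of requests have returned a 5xx over the last 5 minutes
          expr: |
            sum(rate(http_requests_total{job="example-app", status=~"5.."}[5m]))
              /
            sum(rate(http_requests_total{job="example-app"}[5m])) > 0.05
          for: 10m
          labels:
            severity: warning
          annotations:
            summary: example-app is returning an elevated rate of 5xx responses
```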
## Grafana diff --git a/content/managed-addons/monitoring-stack/workload-application-alerts.md b/content/managed-addons/monitoring-stack/workload-application-alerts.md index 32885c4d..aa5b0a5f 100644 --- a/content/managed-addons/monitoring-stack/workload-application-alerts.md +++ b/content/managed-addons/monitoring-stack/workload-application-alerts.md @@ -41,7 +41,7 @@ A sample AlertmanagerConfig can be configured in [Application Chart](https://git | Parameter | Description | |:---|:---| | .Values.alertmanagerConfig.enabled | Enable alertmanagerConfig for this app (Will be merged in the base config) -| .Values.alertmanagerConfig.spec.route | The Alertmanager route definition for alerts matching the resource’s namespace. It will be added to the generated Alertmanager configuration as a first-level route +| .Values.alertmanagerConfig.spec.route | The Alertmanager route definition for alerts matching the resource's namespace. It will be added to the generated Alertmanager configuration as a first-level route | .Values.alertmanagerConfig.spec.receivers | List of receivers We will use Slack as an example here. diff --git a/content/managed-addons/sonarqube/overview.md b/content/managed-addons/sonarqube/overview.md index 61a193f4..115f6e78 100644 --- a/content/managed-addons/sonarqube/overview.md +++ b/content/managed-addons/sonarqube/overview.md @@ -12,10 +12,10 @@ SonarQube [plugs into the application lifecycle management (ALM)](https://docs.s The continuous integration (CI) server integrates SonarQube into the ALM. The SonarQube solution consists of several components: The central component is the SonarQube Server, which runs the SonarScanner, processes the resulting analysis reports, stores the reports in SonarQube Database, and displays the reports in the SonarQube UI. A CI server uses a stage/goal/task in its build automation to trigger the language-specific SonarScanner to scan the code being built. Developers can view the resulting analysis report in the SonarQube UI. -**From a tester’s standpoint**, SonarQube is worth attention because it will help you pinpoint the spots where automated testing is thin or nonexistent. It may also help target manual penetration and security testing. +**From a tester's standpoint**, SonarQube is worth attention because it will help you pinpoint the spots where automated testing is thin or nonexistent. It may also help target manual penetration and security testing. -**From a developer’s standpoint**, SonarQube is worth the effort because it helps you grow as a coder. From language-specific subtleties to thread safety and resource management, SonarQube can show you what you’re getting wrong—or doing sub-optimally—and point you in the right direction for fixing it. That guidance isn’t just for the folks fresh out of school. Experienced programmers can learn from SonarQube, too, even if it’s only that their super-elegant code will be unreadable to the new guy. Plus, let’s face it; everyone has off days, and SonarQube helps coders find their goofs and fix them quickly. +**From a developer's standpoint**, SonarQube is worth the effort because it helps you grow as a coder. From language-specific subtleties to thread safety and resource management, SonarQube can show you what you're getting wrong—or doing sub-optimally—and point you in the right direction for fixing it. That guidance isn't just for the folks fresh out of school. 
Experienced programmers can learn from SonarQube, too, even if it's only that their super-elegant code will be unreadable to the new guy. Plus, let's face it; everyone has off days, and SonarQube helps coders find their goofs and fix them quickly. -**From a software architect’s standpoint**, SonarQube is worth the time because it helps you keep an eye on whether your cleanly delineated initial design is being degraded over time with creeping dependency cycles. It can show you whether the internal coding rules are being followed, and it can help you spot rising complexity that needs to be refactored. +**From a software architect's standpoint**, SonarQube is worth the time because it helps you keep an eye on whether your cleanly delineated initial design is being degraded over time with creeping dependency cycles. It can show you whether the internal coding rules are being followed, and it can help you spot rising complexity that needs to be refactored. -**From a project management standpoint**, SonarQube is worth the focus because testing alone isn’t enough. It can only show whether software does what it’s supposed to do: its level of external quality. On the other hand, SonarQube analyzes and fosters internal quality: whether an application will run optimally and be readily maintainable and extensible down the road. +**From a project management standpoint**, SonarQube is worth the focus because testing alone isn't enough. It can only show whether software does what it's supposed to do: its level of external quality. On the other hand, SonarQube analyzes and fosters internal quality: whether an application will run optimally and be readily maintainable and extensible down the road. diff --git a/content/managed-addons/tekton/introduction.md b/content/managed-addons/tekton/introduction.md index ba645c93..cb0000c9 100644 --- a/content/managed-addons/tekton/introduction.md +++ b/content/managed-addons/tekton/introduction.md @@ -1,38 +1,19 @@ # Tekton -Pipelines are a representation of the flow/automation in a CI/CD process. Typically, a pipeline might call out discrete - steps in the software delivery process and present them visually or via a high-level scripting language so the flow can - be manipulated. The steps might include build, unit tests, acceptance tests, packaging, documentation, reporting, and - deployment and verification phases. Well-designed pipelines help deliver better quality code faster by enabling - participants in the software delivery process to more easily diagnose and respond to feedback. +Pipelines are a representation of the flow/automation in a CI/CD process. Typically, a pipeline might call out discrete steps in the software delivery process and present them visually or via a high-level scripting language so the flow can be manipulated. The steps might include build, unit tests, acceptance tests, packaging, documentation, reporting, and deployment and verification phases. Well-designed pipelines help deliver better quality code faster by enabling participants in the software delivery process to more easily diagnose and respond to feedback. -The CI/CD pipeline is one of the best practices for teams to implement, for delivering code changes more -frequently and reliably. CI/CD pipelines embody a culture, set of operating principles, and collection of practices that -enable application development teams to deliver code changes more frequently and reliably. 
+The CI/CD pipeline is one of the best practices for teams to implement for delivering code changes more frequently and reliably. CI/CD pipelines embody a culture, set of operating principles, and collection of practices that enable application development teams to deliver code changes more frequently and reliably.

 ## Continuous Integration

-Continuous integration (CI) concerns the integration of code from potentially multiple authors into a shared source code
- management (SCM) repository. Such check-ins could occur many times a day, and automation steps in such a process could
- include gates or controls to expose any issues as early as possible. SCMs such as Git include workflow support to
- commit to trunk, push, and merge code pull requests from multiple developers. With containers, a Git push event could
- be configured to then trigger an image build event via the webhooks mechanism.
+Continuous integration (CI) concerns the integration of code from potentially multiple authors into a shared source code management (SCM) repository. Such check-ins could occur many times a day, and automation steps in such a process could include gates or controls to expose any issues as early as possible. SCMs such as Git include workflow support to commit to trunk, push, and merge code pull requests from multiple developers. With containers, a Git push event could be configured to then trigger an image build event via the webhooks mechanism.

 ## Continuous Delivery

-Once a CI strategy is in place, consideration can then move to achieving continuous delivery (CD).
-This involves automating the steps required to promote the work product from one environment to the next within the
-defined software development lifecycle (SDLC). Such steps could include automated testing, smoke, unit, functional,
-and static code analysis and static dependency checks for known security vulnerabilities. With containers, promotion in
-later stages of the SLC may merely involve the tagging of the (immutable) image to mark acceptance.
-Binary promotions are also possible such that only the image is pushed (to the target registry of the new environment),
-leaving source code in place.
-
+Once a CI strategy is in place, consideration can then move to achieving continuous delivery (CD). This involves automating the steps required to promote the work product from one environment to the next within the defined software development lifecycle (SDLC). Such steps could include automated smoke, unit, and functional testing, static code analysis, and static dependency checks for known security vulnerabilities. With containers, promotion in later stages of the SDLC may merely involve the tagging of the (immutable) image to mark acceptance. Binary promotions are also possible such that only the image is pushed (to the target registry of the new environment), leaving source code in place.
+
 ## Continuous Deployment

-By convention, we can denote the special case of automated continuous delivery to production as continuous deployment (CD).
- We make such a distinction because such deployments may be subject to additional governance processes and gates—for
- example, deliberate human intervention to manage risk and complete sign-off procedures. We make such a distinction
- because such deployments may be subject to additional governance processes.
+By convention, we can denote the special case of automated continuous delivery to production as continuous deployment (CD). We make such a distinction because such deployments may be subject to additional governance processes and gates—for example, deliberate human intervention to manage risk and complete sign-off procedures.

 ![Continuous Integration vs Continuous Delivery vs Continuous Deployment](./images/CI-CD-CD.png)

diff --git a/content/managed-addons/tekton/openshift-pipelines/deploying-delivery-pipeline.md b/content/managed-addons/tekton/openshift-pipelines/deploying-delivery-pipeline.md
index 4343458e..4bcb01b0 100644
--- a/content/managed-addons/tekton/openshift-pipelines/deploying-delivery-pipeline.md
+++ b/content/managed-addons/tekton/openshift-pipelines/deploying-delivery-pipeline.md
@@ -4,8 +4,6 @@

 ![delivery-workflow](./images/delivery-workflow.jpg)

-[[toc]]
-
 ### Pre-requisites

 This section provides the pre-requisite steps for this workshop
diff --git a/mkdocs.yml b/mkdocs.yml
index bda488ce..455e1d48 100644
--- a/mkdocs.yml
+++ b/mkdocs.yml
@@ -53,19 +53,15 @@ nav:
   - about/saap-key-differentiators.md
   - Service Definition:
     - about/service-definition/overview.md
-    - about/service-definition/account-management.md
-    - about/service-definition/k8s-management.md
-    - about/service-definition/logging.md
-    - about/service-definition/monitoring.md
-    - about/service-definition/networking.md
-    - about/service-definition/storage.md
     - about/service-definition/platform.md
+    - about/service-definition/monitoring.md
+    - about/service-definition/logging.md
     - about/service-definition/security.md
     - about/service-definition/secrets-management.md
-    - about/service-definition/artifacts-management.md
-    - about/service-definition/cicd-pipelines.md
     - about/service-definition/service-mesh.md
-    - about/service-definition/multitenancy.md
+    - about/service-definition/account-management.md
+    - about/service-definition/networking.md
+    - about/service-definition/storage.md
   - about/responsibilities.md
   - Cloud Providers:
     - about/cloud-providers/overview.md
@@ -81,10 +77,10 @@ nav:
   - about/update-lifecycle.md
   - about/onboarding.md
   - For Administrators:
-    - Plan your envionment:
+    - Plan your environment:
       - for-administrators/plan-your-environment/sizing.md
-    - Provision your cluster:
-      - for-administrators/provision-your-cluster.md
+    - Create your cluster:
+      - for-administrators/create-your-cluster.md
     - Secure your cluster:
       - for-administrators/secure-your-cluster/user-access.md
       - for-administrators/secure-your-cluster/secure-routes.md
@@ -151,7 +147,7 @@ nav:
   - For Developers:
     - Tutorials:
       - for-developers/tutorials/00-prepare-environment/prepare-env.md
-      - for-developers/tutorials/00-prepare-environment/step-by-step-guide.md 
+      - for-developers/tutorials/00-prepare-environment/step-by-step-guide.md
      - for-developers/tutorials/01-access-cluster/access-cluster.md
      - for-developers/tutorials/02-containerize-app/containerize-app.md
      - for-developers/tutorials/03-package-app/package-app.md
@@ -168,12 +164,12 @@ nav:
      - for-developers/tutorials/14-scale-app/scale-app.md
      - for-developers/tutorials/15-validate-auto-reload/validate-auto-reload.md
      - for-developers/tutorials/16-validate-down-alert/validate-down-alert.md
-      - for-developers/tutorials/17-add-pdb/add-pdb.md
-      - for-developers/tutorials/18-add-network-policy/add-network-policy.md
-      - for-developers/tutorials/19-backup-data/backup-data.md
-      - for-developers/tutorials/20-restore-data/restore-data.md
-      - for-developers/tutorials/21-add-env-variable/add-env-variable.md
-      - 
for-developers/tutorials/22-add-ci-pipeline/add-ci-pipeline.md + - for-developers/tutorials/17-add-pdb/add-pdb.md + - for-developers/tutorials/18-add-network-policy/add-network-policy.md + - for-developers/tutorials/19-backup-data/backup-data.md + - for-developers/tutorials/20-restore-data/restore-data.md + - for-developers/tutorials/21-add-env-variable/add-env-variable.md + - for-developers/tutorials/22-add-ci-pipeline/add-ci-pipeline.md - How-to guides: - for-developers/how-to-guides/overview.md - Explanation: @@ -181,7 +177,7 @@ nav: - for-developers/explanation/plan-your-deployment.md - for-developers/explanation/inner-vs-outer-loop.md - for-developers/explanation/inner-loop.md - - for-developers/explanation/local-development-workflow.md + - for-developers/explanation/local-development-workflow.md - For CISOs: - for-cisos/overview.md - for-cisos/policies/policies.md