diff --git a/content/about/cloud-providers/aws.md b/content/about/cloud-providers/aws.md
index 5ce8ed2b..0ecc9070 100644
--- a/content/about/cloud-providers/aws.md
+++ b/content/about/cloud-providers/aws.md
@@ -8,8 +8,8 @@ An AWS account is needed to create and manage cluster on AWS. The following crit
 |Type | Limit |
 |------------|------------|
- | Virtual Machines | Varies. The limit should be 12 initially. (Initial deployment is 3 master + 3 infra + 3 worker)|
- | Regional vCPUs | The limit should be A x B x 2 , where A = no. of VMS (worker + infra + master), B = vCPUs per VM) |
+ | Virtual Machines | Varies. The limit should be 12 initially. (Initial deployment is 3 control plane + 3 infra + 3 worker)|
+ | Regional vCPUs | The limit should be A x B x 2, where A = no. of VMs (worker + infra + control plane) and B = vCPUs per VM |
 | Elastic IPs (EIPs) | 5 |
 | Virtual Private Clouds (VPCs) | 5 |
 | Elastic Load Balancing (ELB/NLB) | 3 |
diff --git a/content/about/cloud-providers/azure.md b/content/about/cloud-providers/azure.md
index d1e64198..c7f97948 100644
--- a/content/about/cloud-providers/azure.md
+++ b/content/about/cloud-providers/azure.md
@@ -8,8 +8,8 @@ An Azure subscription is needed to create and manage cluster on Azure. The follo
 |Type | Limit |
 |------------|------------|
- | Virtual Machines | Varies. The limit should be 12 initially. (Initial deployment is 3 master + 3 infra + 3 worker) |
- | Regional vCPUs | The limit should be A x B x 2 , where A = no. of VMS (worker + infra + master), B = vCPUs per VM) |
+ | Virtual Machines | Varies. The limit should be 12 initially. (Initial deployment is 3 control plane + 3 infra + 3 worker) |
+ | Regional vCPUs | The limit should be A x B x 2, where A = no. of VMs (worker + infra + control plane) and B = vCPUs per VM |
 | Public IP addresses | 5 |
 | Private IP Addresses | 7 |
 | Network Interfaces | 6 |
diff --git a/content/about/cloud-providers/gcp.md b/content/about/cloud-providers/gcp.md
index 1eec0726..704a1220 100644
--- a/content/about/cloud-providers/gcp.md
+++ b/content/about/cloud-providers/gcp.md
@@ -8,8 +8,8 @@ A GCP account is needed to create and manage cluster on GCP. The following crite
 |Type | Limit |
 |------------|------------|
- | Virtual Machines | Varies. The limit should be 12 initially. (Initial deployment is 3 master + 3 infra + 3 worker)|
- | Regional vCPUs | The limit should be A x B x 2 , where A = no. of VMS (worker + infra + master), B = vCPUs per VM) |
+ | Virtual Machines | Varies. The limit should be 12 initially. (Initial deployment is 3 control plane + 3 infra + 3 worker)|
+ | Regional vCPUs | The limit should be A x B x 2, where A = no. of VMs (worker + infra + control plane) and B = vCPUs per VM |
 | In-use global IP addresses | 4 |
 | Service accounts | 5 |
 | Firewall Rules | 11|
diff --git a/content/about/service-definition/networking.md b/content/about/service-definition/networking.md
index fc4106ba..df93fd20 100644
--- a/content/about/service-definition/networking.md
+++ b/content/about/service-definition/networking.md
@@ -14,7 +14,7 @@ SAAP includes TLS security certificates needed for both internal and external se
 ## Load-balancers
 
-SAAP is normally created via the installer provisioned infrastructure (IPI) installation method which installs operators that manage load-balancers in the customer cloud, and API load-balancers to the master nodes. Application load-balancers are created as part of creating routers and ingresses. The operators use cloud identities to interact with the cloud providers API to create the load-balancers.
+SAAP is normally created via the installer-provisioned infrastructure (IPI) installation method, which installs operators that manage load-balancers in the customer cloud and API load-balancers for the control plane nodes. Application load-balancers are created as part of creating routers and ingresses. The operators use cloud identities to interact with the cloud provider's API to create the load-balancers. The user-provisioned infrastructure (UPI) installation method is also possible if extra security is needed; in that case, you must create the API and application ingress load-balancing infrastructure separately, before SAAP is installed.
diff --git a/content/for-administrators/cluster-lifecycle/hibernate-your-cluster.md b/content/for-administrators/cluster-lifecycle/hibernate-your-cluster.md
index 6e4a8395..25f16147 100644
--- a/content/for-administrators/cluster-lifecycle/hibernate-your-cluster.md
+++ b/content/for-administrators/cluster-lifecycle/hibernate-your-cluster.md
@@ -2,7 +2,7 @@
 For clusters running non-critical workloads, e.g. test, development or those only utilized during business hours, it is possible to schedule Cluster Hibernation to save on cloud costs, where Pay-as-you-go cloud computing (PAYG cloud computing) model is implemented.
 
-Cluster Hibernation automatically powers your cluster nodes (including master nodes) up or down according to your defined cron schedule.
+Cluster Hibernation automatically powers your cluster nodes (including control plane nodes) up or down according to your defined cron schedule.
 
 It takes around 1-3 minutes to take your cluster offline and about 3-5 minutes to power back up depending on your cloud provider.
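The hibernation guidance above lends itself to simple PAYG arithmetic; the sketch below estimates weekly savings for a business-hours-only schedule. The 08:00-18:00 weekday window and the hourly rate are illustrative assumptions, not figures from this documentation:

```python
# Estimate PAYG savings from hibernating a cluster outside business hours.
# Schedule (weekdays, 10 hours/day) and hourly rate are illustrative
# assumptions, not values taken from the docs.

def weekly_uptime_hours(hours_per_day: int = 10, days_per_week: int = 5) -> int:
    """Hours per week the cluster is powered on under the cron schedule."""
    return hours_per_day * days_per_week

def weekly_savings(rate_per_hour: float) -> float:
    """Cost avoided per week by powering nodes down outside the schedule."""
    total = 24 * 7                        # 168 hours in a week
    off = total - weekly_uptime_hours()   # hours spent hibernated
    return off * rate_per_hour

print(weekly_uptime_hours())           # 50 hours up per week
print(weekly_savings(3.0))             # 118 off-hours * 3.0/h = 354.0
```

With a business-hours schedule the cluster is off roughly 70% of the week, which is where the bulk of the PAYG savings comes from.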
diff --git a/content/for-administrators/plan-your-environment/sizing.md b/content/for-administrators/plan-your-environment/sizing.md
index 86cb60e7..2fddfafc 100644
--- a/content/for-administrators/plan-your-environment/sizing.md
+++ b/content/for-administrators/plan-your-environment/sizing.md
@@ -29,7 +29,7 @@ The overall minimum resource requirements are:
 
 | Machine pool role | Minimum size (vCPU x Memory x Storage) | Minimum pool size | vCPU | Total Memory (GiB) | Total Storage (GiB)
 |:---|:---|---:|---:|---:|---:|
-| Master | 6 x 24 x 120 | 3 | 18 | 72 | 360 |
+| Control plane | 6 x 24 x 120 | 3 | 18 | 72 | 360 |
 | Infra | 4 x 16 x 120 | 2 | 8 | 32 | 240 |
 | Monitoring | 4 x 32 x 120 | 1 | 4 | 32 | 120 |
 | Worker | 4 x 16 x 120 | 3 | 12 | 48 | 360 |
@@ -41,7 +41,7 @@ The recommended resource requirements are:
 
 | Machine pool role | Minimum size (vCPU x Memory x Storage) | Minimum pool size | vCPU | Total Memory (GiB) | Total Storage (GiB) |
 |:---|:---|---:|---:|---:|---:|
-| Master | 6 x 24 x 120 | 3 | 18 | 72 | 360 |
+| Control plane | 6 x 24 x 120 | 3 | 18 | 72 | 360 |
 | Infra | 4 x 16 x 120 | 2 | 8 | 32 | 240 |
 | Monitoring | 4 x 32 x 120 | 1 | 4 | 32 | 120 |
 | Logging | 4 x 16 x 120 | 1 | 4 | 16 | 120 |
@@ -51,9 +51,9 @@ The recommended resource requirements are:
 
 ## Compute
 
-### 3 x Master
+### 3 x Control plane
 
-The control plane, which is composed of master nodes, also known as the control plane, manages the SAAP cluster. The control plane nodes run the control plane. No user workloads run on master nodes.
+The control plane manages the SAAP cluster and runs on the dedicated control plane nodes. No user workloads run on control plane nodes.
 
 ### 2 x Infra
diff --git a/content/help/faqs/product.md b/content/help/faqs/product.md
index c698137d..ad22d17f 100644
--- a/content/help/faqs/product.md
+++ b/content/help/faqs/product.md
@@ -6,7 +6,7 @@
 We currently support Azure, AWS, Google, OpenStack and VMWare.
 
 ## What does Stakater Agility Platform include?
-Each Stakater Agility Platform cluster comes with a fully-managed control plane (master nodes), infra nodes and application nodes. Installation, management, maintenance, and upgrades are performed by Stakater SRE. Operational services (such as logging, metrics, monitoring, etc.) are available as well and are fully managed by Stakater SRE.
+Each Stakater Agility Platform cluster comes with a fully-managed control plane, infra nodes and application nodes. Installation, management, maintenance, and upgrades are performed by Stakater SRE. Operational services (such as logging, metrics, monitoring, etc.) are available as well and are fully managed by Stakater SRE.
 
 ## What is the current version of Red Hat OpenShift running in Stakater Agility Platform?
diff --git a/content/help/k8s-concepts/cloud-native-app.md b/content/help/k8s-concepts/cloud-native-app.md
index 5e5e2566..c578472f 100644
--- a/content/help/k8s-concepts/cloud-native-app.md
+++ b/content/help/k8s-concepts/cloud-native-app.md
@@ -170,7 +170,7 @@ The key to Design, Build, Release, and Run is that the process is completely eph
 - Well-defined process to build (e.g. compile) the application and start it (e.g. a Makefile)
 - Dockerfile defines ENTRYPOINT to run the application
 - Docker composition (docker-compose.yml) can bring up the environment for automated testing
-- Cut releases on merge to master (preferred, not required); use semver
+- Cut releases on merge to main (preferred, not required); use semver
 
 Stakater App Agility Platform includes managed Tekton and ArgoCD to support all sorts of CI&CD workflows.
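The "cut releases on merge to main; use semver" bullet above can be made concrete with the kind of small helper a release pipeline might run after a merge. This is a generic semver sketch, not part of Stakater's tooling; the `bump` function name is my own:

```python
# Minimal semver bump helper, illustrating "cut releases on merge to main;
# use semver". Purely a sketch; not taken from any Stakater tool.

def bump(version: str, part: str) -> str:
    """Bump a MAJOR.MINOR.PATCH version string by the given part."""
    major, minor, patch = (int(x) for x in version.split("."))
    if part == "major":
        return f"{major + 1}.0.0"
    if part == "minor":
        return f"{major}.{minor + 1}.0"
    if part == "patch":
        return f"{major}.{minor}.{patch + 1}"
    raise ValueError(f"unknown part: {part}")

print(bump("1.4.2", "minor"))  # 1.5.0
```

A real pipeline would typically derive the bump level from commit messages (e.g. conventional commits) and push the resulting tag, which is what triggers the release step described in the Tekton section later in this change.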
diff --git a/content/help/k8s-concepts/high-availability.md b/content/help/k8s-concepts/high-availability.md
index 7ce08209..b3577b47 100644
--- a/content/help/k8s-concepts/high-availability.md
+++ b/content/help/k8s-concepts/high-availability.md
@@ -48,7 +48,7 @@ In the event of a complete control plane node outage, the OpenShift APIs will no
 All services running on infrastructure nodes are configured by Stakater to be highly available and distributed across infrastructure nodes. In the event of a complete infrastructure outage, these services will be unavailable until these nodes have been recovered.
 
-The Kubernetes master is the main component that keeps your cluster up and running. The master stores cluster resources and their configurations in the etcd database that serves as the single point of truth for your cluster. The Kubernetes API server is the main entry point for all cluster management requests from the worker nodes to the master, or when you want to interact with your cluster resources. To protect your cluster master from a zone failure: create a cluster in a multi-zone location, which spreads the master across zones or consider setting up a second cluster in another zone.
+The Kubernetes control plane is the main component that keeps your cluster up and running. The control plane stores cluster resources and their configurations in the etcd database, which serves as the single source of truth for your cluster. The Kubernetes API server is the main entry point for all cluster management requests from the worker nodes to the control plane, or when you want to interact with your cluster resources. To protect your cluster's control plane from a zone failure, create a cluster in a multi-zone location, which spreads the control plane across zones, or consider setting up a second cluster in another zone.
 ### Potential failure point 4: Zone availability
diff --git a/content/legal-documents/dpa.md b/content/legal-documents/dpa.md
index 8f6e92c8..db2c8b58 100644
--- a/content/legal-documents/dpa.md
+++ b/content/legal-documents/dpa.md
@@ -1,6 +1,6 @@
 # Data Processing Agreement (DPA)
 
-`Version: 9 May 2023`
+`Version: 18 September 2023`
 
 1. **Objectives of DPA**
@@ -12,7 +12,7 @@
 1. The characteristics of the Data, the categories of individuals whose data is being processed, and the duration and objectives of the processing are as follows, unless otherwise explicitly stated in the Framework Agreement:
 
-    1. **Data type:** The processed Data includes personal master data, communication data (e.g. email, chat), registration data, documents, and other data in electronic format that the Processor processes for the Controller in connection with the main contractual services. The Controller assures that no data that requires special protection will be transferred for processing without prior agreement.
+    1. **Data type:** The processed Data includes personal data, communication data (e.g. email, chat), registration data, documents, and other data in electronic format that the Processor processes for the Controller in connection with the main contractual services. The Controller assures that no data that requires special protection will be transferred for processing without prior agreement.
 
     1. **Categorization of data subjects:** Employees, customers, suppliers, and any other individuals associated with the data controller whose data the Controller transmits to the Processor under the Framework Agreement.
diff --git a/content/legal-documents/sla.md b/content/legal-documents/sla.md
index 8da52304..6c4c8f09 100644
--- a/content/legal-documents/sla.md
+++ b/content/legal-documents/sla.md
@@ -1,6 +1,6 @@
 # Service Level Agreement (SLA)
 
-`Version: 9 May 2023`
+`Version: 18 September 2023`
 
 This SERVICE LEVEL AGREEMENT ("**SLA**") is by and between **Stakater** and you ("**Customer**"). Each a "Party", and together the "Parties".
@@ -180,7 +180,7 @@ Payment is due once during a Service Period and the Customer will be charged for
 - "**Covered Service**" means, for each of Zonal Clusters and Regional Clusters, the OpenShift API provided by Customer's cluster(s), so long as the version of OpenShift Engine deployed in the cluster is a version currently offered in the Stable Channel.
 - "**Stable Channel**" means the Red Hat OpenShift Container Platform Stable release channel.
-- "**Zonal Cluster**" means a single-Zone cluster with control planes (master) running in one Zone (data centre).
+- "**Zonal Cluster**" means a single-Zone cluster with the control plane running in one Zone (data centre).
 - "**Regional Cluster**" means a cluster topology that consists of three replicas of the control plane, running in multiple Zones within a given Region.
 
 ## Service Level Objectives
diff --git a/content/managed-addons/nexus/explanation/permissions.md b/content/managed-addons/nexus/explanation/permissions.md
index 5fa73cbf..9b00c67b 100644
--- a/content/managed-addons/nexus/explanation/permissions.md
+++ b/content/managed-addons/nexus/explanation/permissions.md
@@ -30,7 +30,7 @@ Machine user interacts with nexus using API or CLI and we are using nexus local
 Here is machine users list:
 
 1. `helm-user`: is able to use with OpenShift service DNS (public link is not available)
-1. `docker-user`: is able to use with the dedicated route for docker registry. Because the docker client does not allow a context as part of the path to a registry, a specific and separate port is used for docker registry. And also to use the docker registry at the node level (kubelet) the docker registry should be exposed. So we use a route which has the OpenShift cluster gateway IP in the whitelist.
+1. `docker-user`: is able to use the dedicated route for the docker registry. Because the docker client does not allow a context as part of the path to a registry, a specific and separate port is used for the docker registry. Also, to use the docker registry at the node level (kubelet), the docker registry must be exposed, so we use a route that has the OpenShift cluster gateway IP in the allow-list.
 
 `mnn-users` are able to access maven2, NuGet and NPM repositories (`mnn` stands for Maven, NuGet and NPM); developers should use these users if they want to connect their package manager with nexus:
diff --git a/content/managed-addons/tekton/openshift-pipelines/deploying-cicd-pipeline.md b/content/managed-addons/tekton/openshift-pipelines/deploying-cicd-pipeline.md
index 77198abc..de15b7cd 100644
--- a/content/managed-addons/tekton/openshift-pipelines/deploying-cicd-pipeline.md
+++ b/content/managed-addons/tekton/openshift-pipelines/deploying-cicd-pipeline.md
@@ -92,15 +92,15 @@ Create a PR and verify its status
 
 ![PR Status](./images/pr-comment.png)
 
-#### CD - Merge this PR into the master
+#### CD - Merge this PR into main
 
-Now attempt to merge this PR in master
+Now attempt to merge this PR into main
 
-Merge PR in master
+Merge PR into main
 
-* Merge the Pull Request in the master to rigger the pipeline on master branch
+* Merge the Pull Request into main to trigger the pipeline on the main branch
 
-![Master status](./images/pr-merged.png)
+![Main status](./images/pr-merged.png)
 
 * After successful execution a new release and tag will be created in the `Releases` section on GitHub
diff --git a/vocabulary b/vocabulary
index ea19913d..6df79442 160000
--- a/vocabulary
+++ b/vocabulary
@@ -1 +1 @@
-Subproject commit ea19913d675f3585545df48eb6893285525c7e91
+Subproject commit 6df79442723244b60287235a6319d5d422c0b8b0