fix: fix doc typos and minor changes #614

Merged: 1 commit, Nov 15, 2023

3 changes: 2 additions & 1 deletion README.md
@@ -81,7 +81,8 @@ NAME JOINED AGE
kind-cluster-1 True 25m
```

-Now we can go ahead and use the workload orchestration capabilities offered by fleet, please follow the link which details the various features offered https://github.com/Azure/fleet/tree/main/docs/howtos
+Now we can go ahead and use the workload orchestration capabilities offered by fleet; please start with the [concept](https://github.com/Azure/fleet/tree/main/docs/concepts/README.md) to
+understand the details of the various features offered by fleet.

## Code of Conduct

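For context, the workload orchestration the README points readers to centers on the `ClusterResourcePlacement` API covered in the how-to guides changed below. A minimal sketch of such a placement; the namespace name is a stand-in, and the API version should be checked against the fleet release in use:

```yaml
# Minimal ClusterResourcePlacement sketch; "test-app" is an assumed namespace.
apiVersion: placement.kubernetes-fleet.io/v1beta1
kind: ClusterResourcePlacement
metadata:
  name: example-placement
spec:
  resourceSelectors:
    - group: ""
      version: v1
      kind: Namespace
      name: test-app        # assumed namespace to place
  policy:
    placementType: PickAll  # place onto every eligible member cluster
```
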
2 changes: 1 addition & 1 deletion docs/concepts/Scheduler/README.md
@@ -99,7 +99,7 @@ controller updates the label of the existing `ClusterSchedulingPolicySnapshot` i
the scheduler won't move any existing resources that are already scheduled and just fulfill the new requirement.

2. The following cluster changes trigger scheduling:
-* a cluster, originally ineligible for resource placement for some reasons, becomes eligible, such as:
+* a cluster, originally ineligible for resource placement for some reason, becomes eligible, such as:
* the cluster setting changes, specifically `MemberCluster` labels have changed
* an unexpected deployment which originally leads the scheduler to discard the cluster (for example, agents not joining,
networking issues, etc.) has been resolved
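
To illustrate the label-change trigger above: editing the labels on a `MemberCluster` object is enough for a previously ineligible cluster to start matching a placement's affinity terms, at which point the scheduler reconsiders it. A sketch of the relevant fragment (the API version and label are assumptions):

```yaml
# MemberCluster fragment; only metadata.labels is being changed here.
apiVersion: cluster.kubernetes-fleet.io/v1beta1
kind: MemberCluster
metadata:
  name: kind-cluster-1
  labels:
    environment: prod   # newly added label; can make the cluster eligible
```
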
6 changes: 3 additions & 3 deletions docs/howtos/README.md
@@ -1,13 +1,13 @@
# Fleet How-To Guides

-The Fleet documentation provides a number of how-to guides that help you get familar with
+The Fleet documentation provides a number of how-to guides that help you get familiar with
specific Fleet tasks, such as how to use `ClusterResourcePlacement`, a Fleet API, to place
resources across different clusters.

> Note
>
> If you are just getting started with Fleet, it is recommended that you refer to the
-> [Fleet Getting Started Tutorials](../getting-started/README.md) for an overview of Fleet
+> [Fleet Getting Started Guide](../../README.md) for how to create a fleet and an overview of Fleet
Contributor Author: and please refer to the concept read me too

Contributor: a redundant period?

> features and capabilities.

Below is a walkthrough of all the how-to guides currently available, categorized by their
@@ -19,7 +19,7 @@ domains:

This how-to guide explains the specifics of the `ClusterResourcePlacement` API, including its
resource selectors, scheduling policy, rollout strategy, and more. `ClusterResourcePlacement`
-is a core Fleet API that allows easy and flexibile distribution of resources to clusters.
+is a core Fleet API that allows easy and flexible distribution of resources to clusters.

* [Using Affinity to Pick Clusters](affinities.md)

18 changes: 9 additions & 9 deletions docs/howtos/affinities.md
@@ -6,7 +6,7 @@ for resource placement.
Affinity terms are featured in the `ClusterResourcePlacement` API, specifically in the scheduling
policy section. Each affinity term is a particular requirement that Fleet will check against clusters,
and the fulfillment of this requirement (or the lack of it) has a certain effect on whether
-Fleet would pick a cluster for ressource placement.
+Fleet would pick a cluster for resource placement.

Fleet currently supports two types of affinity terms:

@@ -39,14 +39,14 @@ spec:
    placementType: PickAll
    affinity:
      clusterAffinity:
-       requiredDuringSchedulingIgnoredDuringExection:
+       requiredDuringSchedulingIgnoredDuringExecution:
          clusterSelectorTerms:
            - labelSelector:
                matchLabels:
                  system: critical
```

-The example above inlcudes a `requiredDuringSchedulingIgnoredDuringExecution` term which requires
+The example above includes a `requiredDuringSchedulingIgnoredDuringExecution` term which requires
that the label `system=critical` must be present on a cluster before Fleet can pick it for the
`ClusterResourcePlacement`.
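
The same requirement can also be written with `matchExpressions` rather than `matchLabels`, which is handy for existence or set-based checks; a sketch of an equivalent selector term (standard Kubernetes label-selector semantics assumed):

```yaml
requiredDuringSchedulingIgnoredDuringExecution:
  clusterSelectorTerms:
    - labelSelector:
        matchExpressions:
          - key: system
            operator: In    # matches clusters whose system label is critical
            values:
              - critical
```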

@@ -151,7 +151,7 @@ spec:
    placementType: PickAll
    affinity:
      clusterAffinity:
-       requiredDuringSchedulingIgnoredDuringExection:
+       requiredDuringSchedulingIgnoredDuringExecution:
          clusterSelectorTerms:
            - labelSelector:
                matchLabels:
@@ -201,7 +201,7 @@ spec:
    numberOfClusters: 10
    affinity:
      clusterAffinity:
-       preferredDuringSchedulingIgnoredDuringExection:
+       preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 20
            preference:
              labelSelector:
@@ -229,7 +229,7 @@ spec:
    numberOfClusters: 10
    affinity:
      clusterAffinity:
-       preferredDuringSchedulingIgnoredDuringExection:
+       preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 20
            preference:
              labelSelector:
@@ -239,7 +239,7 @@ spec:
            preference:
              labelSelector:
                matchLabels:
-                 environent: prod
+                 environment: prod
```

Each cluster will be validated against each affinity term individually; the affinity scores it
@@ -273,13 +273,13 @@ spec:
    numberOfClusters: 10
    affinity:
      clusterAffinity:
-       requiredDuringSchedulingIgnoredDuringExection:
+       requiredDuringSchedulingIgnoredDuringExecution:
          clusterSelectorTerms:
            - labelSelector:
                matchExpressions:
                  - key: system
                    operator: Exists
-       preferredDuringSchedulingIgnoredDuringExection:
+       preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 20
            preference:
              labelSelector:
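
To make the scoring arithmetic concrete: each preferred term a cluster satisfies contributes that term's weight, and the contributions are summed into the cluster's affinity score. A sketch with assumed weights:

```yaml
preferredDuringSchedulingIgnoredDuringExecution:
  - weight: 20
    preference:
      labelSelector:
        matchLabels:
          system: critical
  - weight: 10
    preference:
      labelSelector:
        matchLabels:
          environment: prod
# A cluster carrying both labels scores 20 + 10 = 30; one with only
# environment=prod scores 10; one with neither scores 0.
```
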
18 changes: 9 additions & 9 deletions docs/howtos/crp.md
@@ -16,7 +16,7 @@ The API, generally speaking, consists of the following parts:
* one or more resource selectors, which specify the set of resources to select for placement; and
* a scheduling policy, which determines the set of clusters to place the resources at; and
* a rollout strategy, which controls the behavior of resource placement when the resources
-themselves and/or the scheduling policy are updated, so as to minimize interruptions caused
+themselves and/or the scheduling policy are updated, to minimize interruptions caused
by refreshes

The sections below discuss the components in depth.
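
Putting the three parts together, a sketch of a `ClusterResourcePlacement` naming all of them (the selector, policy, and strategy values here are assumptions for illustration, not defaults):

```yaml
apiVersion: placement.kubernetes-fleet.io/v1beta1
kind: ClusterResourcePlacement
metadata:
  name: crp-example
spec:
  resourceSelectors:            # part 1: which resources to place
    - group: ""
      version: v1
      kind: Namespace
      name: web
  policy:                       # part 2: which clusters to place them on
    placementType: PickN
    numberOfClusters: 3
  strategy:                     # part 3: how refreshes roll out
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
```
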
@@ -201,7 +201,7 @@ spec:
    placementType: PickAll
    affinity:
      clusterAffinity:
-       requiredDuringSchedulingIgnoredDuringExection:
+       requiredDuringSchedulingIgnoredDuringExecution:
          clusterSelectorTerms:
            - labelSelector:
                matchLabels:
@@ -234,7 +234,7 @@ of a label.
* `requiredDuringSchedulingIgnoredDuringExecution` terms are requirements that a cluster
must meet before it can be picked; and
* `preferredDuringSchedulingIgnoredDuringExecution` terms are requirements that, if a
-cluster meets, will set Fleet to prioritze it in scheduling.
+cluster meets, will set Fleet to prioritize it in scheduling.

* A topology spread constraint can help you spread resources evenly across different groups
of clusters. For example, you may want to have a database replica deployed in each region
@@ -255,19 +255,19 @@ spec:
  resourceSelectors:
    - ...
  policy:
-   placementType: PickAll
+   placementType: PickN
    numberOfClusters: 3
    affinity:
      clusterAffinity:
        preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 20
-           perference:
+           preference:
              labelSelector:
                matchLabels:
-                 critial-level: 1
+                 critical-level: 1
```

-The `ClusterResourcePlacement` object above will pick first clusters with the `critial-level=1`
+The `ClusterResourcePlacement` object above will first pick clusters with the `critical-level=1`
label on them; only if there are not enough (fewer than 3) such clusters will Fleet pick clusters with no
such label.

@@ -308,10 +308,10 @@ keep looking until all N clusters are found.
Note that Fleet will stop looking once all N clusters are found, even if there appears a
cluster that scores higher.

-#### Upscaling and downscaling
+#### Up-scaling and downscaling

You can edit the `numberOfClusters` field in the scheduling policy to pick more or fewer clusters.
-When upscaling, Fleet will score all the clusters that have not been picked earlier, and find
+When up-scaling, Fleet will score all the clusters that have not been picked earlier, and find
the most appropriate ones; for downscaling, Fleet will unpick the clusters that rank lower
first.
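
In practice, up-scaling or downscaling is just an edit to the placement's scheduling policy; a sketch showing only the changed field (values assumed):

```yaml
spec:
  policy:
    placementType: PickN
    numberOfClusters: 5   # raised from 3; Fleet scores the not-yet-picked clusters
```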

4 changes: 2 additions & 2 deletions docs/howtos/topology-spread-constraints.md
@@ -12,7 +12,7 @@ You can use topology spread constraints to, for example:
* achieve high-availability for your database backend by making sure that there is at least
one database replica in each region; or
* verify if your application can support clusters of different configurations; or
-* eliminate resource utilization hotspots in your infrastruction through spreading jobs
+* eliminate resource utilization hotspots in your infrastructure through spreading jobs
evenly across sections.

## Specifying a topology spread constraint
@@ -44,7 +44,7 @@ groups.

This is a required field.

-* `maxSkew` specifies how **unevenly** resource placments are spread in your fleet.
+* `maxSkew` specifies how **unevenly** resource placements are spread in your fleet.

The skew of a set of resource placements is defined as the difference in count of
resource placements between the group with the most and the group with
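
As a rough illustration of the skew arithmetic described here, assuming a `region` label as the topology key and the constraint placed in the scheduling policy:

```yaml
spec:
  policy:
    placementType: PickN
    numberOfClusters: 4
    topologySpreadConstraints:
      - maxSkew: 1
        topologyKey: region
        whenUnsatisfiable: DoNotSchedule
# With placements of 2, 1, and 1 across three regions, skew = 2 - 1 = 1,
# which satisfies maxSkew; a 3/1/0 spread would have skew 3 and violate it.
```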