[feature request] Add more information about availability in the status of the yurtappset #1543
Comments
@vie-serendipity PTAL
From what I understand, there is a need to include information about unavailable nodepools in the status of YurtAppSet, since at the moment a failure in one nodepool can affect another. So a single slice in the status of YurtAppSet may be enough to solve this problem, like NotReadyApp []string. Is it necessary to include any other information?
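For illustration, a minimal Go sketch of how such a field might sit in the status type — the package and field names here just follow the suggestion above and are hypothetical, not the actual YurtAppSet API:

```go
package v1alpha1 // illustrative package name, not the actual API package

// YurtAppSetStatus is trimmed to just the proposed field for clarity.
type YurtAppSetStatus struct {
	// NotReadyApp would list the nodepools whose workloads are not ready.
	NotReadyApp []string `json:"notReadyApp,omitempty"`
}
```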
@vie-serendipity I have some ideas here. This is the status field as it stands today:

```yaml
status:
  ...
  currentRevision: edgex-redis-55d684687
  observedGeneration: 1
  poolReplicas:
    beijing: 1
    shanghai: 2
  readyReplicas: 3
  replicas: 3
  templateType: Deployment
```

And this variant adds a poolReadyReplicas field:

```yaml
status:
  ...
  currentRevision: edgex-redis-55d684687
  observedGeneration: 1
  poolReplicas:
    beijing: 1
    shanghai: 2
  poolReadyReplicas:
    beijing: 1
    shanghai: 1
  readyReplicas: 2
  replicas: 3
  templateType: Deployment
```

I think that simply exporting a notReady slice may provide insufficient information in the future. FYI.
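One nice property of the map-shaped fields: per-pool readiness becomes a direct comparison of the two maps. A minimal sketch, assuming hypothetical Go counterparts of the poolReplicas and poolReadyReplicas maps above:

```go
package main

import "fmt"

// poolReady compares desired vs. ready replicas for one pool, assuming the
// two maps have the shapes sketched above (hypothetical, not the real API).
func poolReady(poolReplicas, poolReadyReplicas map[string]int32, pool string) bool {
	return poolReadyReplicas[pool] == poolReplicas[pool]
}

func main() {
	poolReplicas := map[string]int32{"beijing": 1, "shanghai": 2}
	poolReadyReplicas := map[string]int32{"beijing": 1, "shanghai": 1}
	fmt.Println(poolReady(poolReplicas, poolReadyReplicas, "beijing"))  // true
	fmt.Println(poolReady(poolReplicas, poolReadyReplicas, "shanghai")) // false
}
```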
@vie-serendipity Of course, for our current situation, a notReady slice is sufficient. You could think about whether you also want to export the specific quantities.
@LavenderQAQ Given a large number of nodepools, listing all of them may be too heavy, so would it be better to focus only on the unavailable nodepools? Is it necessary to show the detailed replicas, readyReplicas, and condition information of the unavailable nodepools?
@vie-serendipity Indeed, for our IoT side, just listing the notReady nodepools is enough for us.
@vie-serendipity But there seems to be some trouble with the slice structure. If we're using it and we get the slice from the YurtAppSet status and want to quickly determine whether a particular nodepool is ready, we need to do a linear scan over the whole slice.
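To make the concern concrete, here is a minimal sketch of that lookup under the slice design — NotReadyApp is the hypothetical field name from earlier in the thread, and readiness can only be answered by walking the whole slice:

```go
package main

import "fmt"

// isPoolReady has to walk the entire slice to answer a per-pool question,
// unlike the O(1) lookup a map-shaped status field would allow.
func isPoolReady(notReadyApp []string, pool string) bool {
	for _, p := range notReadyApp {
		if p == pool {
			return false
		}
	}
	return true
}

func main() {
	notReadyApp := []string{"shanghai"}
	fmt.Println(isPoolReady(notReadyApp, "beijing"))  // true
	fmt.Println(isPoolReady(notReadyApp, "shanghai")) // false
}
```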
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
@vie-serendipity Can this issue be closed?
@LavenderQAQ yeah
What would you like to be added:
IoT-sig currently deploys PlatformAdmin through YurtAppSet: we use YurtAppSet to distribute the various components required by the EdgeX framework to multiple nodepools, so PlatformAdmin and YurtAppSet form a many-to-many relationship. But this also causes a problem: when a user runs multiple PlatformAdmins, we need to know the deployment state of each one, and the status currently provided by YurtAppSet does not cover this.
For this, we may need some support from YurtAppSet. We hope YurtAppSet can report in its status whether the component in each nodepool is ready (for example, the component in the beijing nodepool is Ready, while the component in the shanghai nodepool is NotReady).
Why is this needed:
Let me give a simple example: suppose there are two PlatformAdmins, one belonging to the beijing nodepool and the other to the shanghai nodepool, and each needs to deploy two components, redis and consul. Below is a diagram of their deployment:
PlatformAdmin also needs to know which components have been successfully deployed, as shown in the following figure:
Next, assume that the redis in the shanghai nodepool suddenly crashes, so the readyReplicas of the corresponding YurtAppSet drops to 1. However, since we cannot tell from the YurtAppSet which nodepool's redis failed, we cannot find the PlatformAdmin whose deployment failed. At present, the code logic of IoT-sig causes both PlatformAdmins to become NotReady (even though the beijing PlatformAdmin is actually healthy), which I think is very unreasonable: a problem with one nodepool's component should not affect the state of another nodepool. This problem also existed in the previous yurt-edgex-manager, see [BUG] Edgex READYDEPLOYMENT quantity in the case of multiple Edgex instance has a cascading effect.
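To sketch what per-pool status would enable (all names here are hypothetical): each PlatformAdmin could derive its readiness only from the pools it owns, so shanghai's failure would no longer mark beijing NotReady:

```go
package main

import "fmt"

// platformAdminReady checks only the pools this PlatformAdmin owns, so a
// failure in another pool cannot cascade. notReady would be derived from
// the per-pool status discussed above.
func platformAdminReady(ownPools []string, notReady map[string]bool) bool {
	for _, p := range ownPools {
		if notReady[p] {
			return false
		}
	}
	return true
}

func main() {
	// shanghai's redis has crashed; beijing is unaffected.
	notReady := map[string]bool{"shanghai": true}
	fmt.Println(platformAdminReady([]string{"beijing"}, notReady))  // true
	fmt.Println(platformAdminReady([]string{"shanghai"}, notReady)) // false
}
```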
In addition, a similar cascading problem occurs when I deploy a new PlatformAdmin, because I cannot tell which nodepool's components have been deployed, and the states of all PlatformAdmins are affected.
Others:
I have considered directly reading the status of the Deployments under the YurtAppSet, but this would add a lot of extra API calls for PlatformAdmin, and PlatformAdmin currently has no code that manages Deployments, so I am not inclined toward this approach.
/kind feature