Standardize the Cloud Controller Manager Build/Release Process #36
The first thing to sort out is how to update the modules so that Go module updates work correctly. The standard main.go has dependencies on 'k8s.io/kubernetes' and 'k8s.io/component-base'. component-base isn't semantically versioned properly, and fetching the main kubernetes module causes a load of version failures, because the staging-redirect 'replace' entries in Kubernetes' go.mod don't apply when it is consumed as an external module.
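For concreteness, here is a rough sketch of the kind of hand-maintained replace block an out-of-tree provider ends up carrying when it imports k8s.io/kubernetes directly. The module path is made up, the versions are only illustrative for a 1.15-era build, and `go mod tidy` rewrites the `kubernetes-1.15.0` tags into pseudo-versions; the point is simply that every staging repository has to be pinned by hand, because the replace directives in Kubernetes' own go.mod are not inherited by consumers.

```
// Illustrative go.mod fragment for an out-of-tree CCM that imports
// k8s.io/kubernetes directly (hypothetical module path, example versions).
module github.com/example/example-cloud-controller-manager

go 1.12

require k8s.io/kubernetes v1.15.0

// The staging repos don't publish semantic versions, so each one has to be
// pinned by hand to the tag that matches the Kubernetes release in use;
// `go mod tidy` rewrites these tags into pseudo-versions.
replace (
	k8s.io/api => k8s.io/api kubernetes-1.15.0
	k8s.io/apimachinery => k8s.io/apimachinery kubernetes-1.15.0
	k8s.io/client-go => k8s.io/client-go kubernetes-1.15.0
	k8s.io/cloud-provider => k8s.io/cloud-provider kubernetes-1.15.0
	k8s.io/component-base => k8s.io/component-base kubernetes-1.15.0
	// ...and so on for the rest of the k8s.io staging repositories.
)
```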
Thanks @NeilW! I agree that removing the imports to k8s.io/kubernetes would help. Re: k8s.io/component-base not being semantically versioned, can you open an issue in […]?
I've spent a day struggling with 1.15 and I've still not managed to get the dependencies sorted out for the cloud-provider. It looks like I'll have to manually code 'replace' entries for all the repos in the 'staging' area of the kubernetes repo. So we definitely have a problem.

However, that does open up a possibility for making cloud providers more standard. If you built a dummy provider that responded to end-to-end tests and was published in a standard way, but didn't actually do anything, then you could 'replace' that provider's interface repo path with a path to a provider's repo that implements the same interface. That allows you to simply replicate the standard repo as, say, 'brightbox-cloud-provider' and just change the 'replace' entry in the 'go.mod' to point to, say, 'brightbox/brightbox-cloud-provider-interface'. Then you can follow the same automated integration testing and deployment/publishing process as the standard dummy provider.

And on the interface repo that people like me maintain, we can run unit tests and set up the dependencies with our own 'go.mod', completely decoupled from the cloud-provider 'shell' the interface will be compiled into.
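If I've understood the idea, the only per-provider change in such a replicated repo would be a single replace line, roughly like the hypothetical go.mod fragment below. Both module paths are invented for illustration; the dummy interface module does not exist under this name, and the Brightbox interface repo is the "say" example from the comment above.

```
// Hypothetical go.mod fragment for a 'brightbox-cloud-provider' copy of the
// standard dummy provider shell; only the replace target differs from the
// dummy provider's own go.mod. Module paths and version are made up.
require example.com/dummy-cloud-provider-interface v1.0.0

// Swap the dummy interface implementation for the provider's real one,
// which satisfies the same published Go interface.
replace example.com/dummy-cloud-provider-interface => github.com/brightbox/brightbox-cloud-provider-interface v1.0.0
```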
In terms of a publishing process, the one I use with Hashicorp to publish our Terraform provider is a good one. I go on a Slack channel and ask them to roll a new release, and after a few manual checks the maintainer of the central repo holding the providers hits the go button on the automated release system. Now Hashicorp have staff managing that central provider repo (https://github.com/terraform-providers), and that may not work with k8s given the nature of the project. But it's something to consider.
I haven't upgraded DigitalOcean's CCM to 1.15 yet, but I do remember that moving to the 1.14 deps was quite a hassle. For instance, it required adding a replace directive for apimachinery, which wasn't obvious for me to spot. I noticed that the latest client-go v12 (corresponding to Kubernetes 1.15, as it seems) encodes these replace directives in its go.mod file now. My guess is that if cloud-provider followed the same pattern of accurately pinning down dependencies for each release through Go modules, consuming cloud-provider would become easier.

@NeilW's idea of providing a dummy provider is interesting, though I'm not sure I've fully grasped yet how it would be consumed. In general, I'd definitely appreciate a sample provider that described the canonical way of setting up a custom cloud provider; last time I went over some of the available implementations from the different clouds, they all had slight variations, which could easily be because their development cycles can't possibly be synchronized perfectly. Or maybe there are legitimate reasons to have divergent setups? It'd be great to have a "source of truth" that outlines one or more recommended setups (similar to client-go's sample directory).
I'm all in favor of removing any dependencies on k8s.io/kubernetes. What's the benefit of moving the cloud provider command part into a new, separate repository? My gut feeling is that it would be easier to reuse the existing k8s.io/cloud-provider repository we have today.
Less that we would consume cloud-provider and more that it would consume us.
We then just build our provider interface libraries to the published Go Interface.
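To make that concrete, a provider interface library of this shape essentially implements k8s.io/cloud-provider's Interface and registers a factory for it. The sketch below is a minimal, do-nothing skeleton against the interface roughly as it stood in the 1.15/1.16 releases discussed here (later releases add methods such as InstancesV2); the package name, provider name, and empty struct are placeholders, and each stubbed method simply reports the feature as unsupported.

```go
// Package exampleprovider is a placeholder name for a provider interface
// library that compiles against the published cloudprovider.Interface.
package exampleprovider

import (
	"io"

	cloudprovider "k8s.io/cloud-provider"
)

// ProviderName is what the cloud-controller-manager selects via --cloud-provider.
const ProviderName = "example"

type cloud struct{}

func init() {
	// Register a factory so the generic cloud-controller-manager shell can
	// construct the provider from its --cloud-config input.
	cloudprovider.RegisterCloudProvider(ProviderName, func(config io.Reader) (cloudprovider.Interface, error) {
		return &cloud{}, nil
	})
}

// The methods below satisfy cloudprovider.Interface; a real provider would
// return working LoadBalancer/Instances/Zones/Routes implementations.
func (c *cloud) Initialize(clientBuilder cloudprovider.ControllerClientBuilder, stop <-chan struct{}) {
}
func (c *cloud) LoadBalancer() (cloudprovider.LoadBalancer, bool) { return nil, false }
func (c *cloud) Instances() (cloudprovider.Instances, bool)       { return nil, false }
func (c *cloud) Zones() (cloudprovider.Zones, bool)               { return nil, false }
func (c *cloud) Clusters() (cloudprovider.Clusters, bool)         { return nil, false }
func (c *cloud) Routes() (cloudprovider.Routes, bool)             { return nil, false }
func (c *cloud) ProviderName() string                             { return ProviderName }
func (c *cloud) HasClusterID() bool                               { return false }
```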
In terms of updating to 1.15: […]

Hope that saves somebody a lot of time.
/assign @yastij
For v1.16: consensus on what the build/release process for CCM should look like.
A couple of things: […] Also, I think we should start publishing binaries stripped of in-tree cloud providers; this would help to drive adoption (see the sketch below). cc @kubernetes/release-engineering
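On the stripped binaries: the Go mechanism that makes this possible is a build constraint, and the Kubernetes tree has been adopting a `providerless` build tag along these lines. The fragment below is only a sketch of how such a guard looks on an in-tree provider's source file (hypothetical package name, not a quote from the Kubernetes repo).

```go
// +build !providerless

// Sketch only: a legacy in-tree provider source file guarded by a build
// constraint. Building the cloud-controller-manager with
// `go build -tags providerless` excludes every file carrying this tag,
// which yields a binary with no in-tree cloud providers compiled in.
package legacyprovider
```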
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/remove-lifecycle stale
/lifecycle frozen
/help
@andrewsykim: Please ensure the request meets the requirements listed here. If this request no longer meets these requirements, the label can be removed. In response to this: /help
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
@cheftako to put together a short proposal for v1.19
Right now each provider is building/releasing the external cloud controller manager in its own way. It might be beneficial to standardize this going forward, or at least set some guidelines on what is expected from a cloud controller manager build/release.

Some questions to consider: […]

We've had this discussion multiple times at KubeCons and SIG calls; it would be great to get some of those ideas vocalized here and formalize this in a doc going forward.
cc @cheftako @jagosan @hogepodge @frapposelli @yastij @dims @justaugustus