Revert "Handle Jobs with ttl_seconds_after_finished = 0 correctly" #2650
Conversation
Hi @justinmchase,
Thank you for reverting this change and properly reporting the related issue. Indeed, it looks like a breaking change and we need to find a better way to cover both scenarios without introducing a breaking change in the minor release.
I have left a small comment. Once it is addressed we can merge this change and cut a patch release.
…r-kubernetes into revert-2596-fix-job-ttl-zero-handling
I updated the release notes files as requested and merged upstream main.
Thanks, @justinmchase!
…er_finished = 0 correctly (#2650)
This change broke the use case where a Job should be created once and then deleted.
@dmpakhar Is it possible to add a separate field to get the behavior you want without regressing the behavior that is currently expected when `ttl_seconds_after_finished` is 0? If it helps: our Job runs a migration pod. The migration pod does run on every deployment, but internally it keeps track of the state of the migration steps and applies each one only once. This is a pretty standard way of handling run-once behavior, rather than trying to manage it with Terraform and Kubernetes.
@justinmchase We have now run into the opposite problem, where the same parameter drives both behaviors. I think an explicit variable such as execute_once/recreate_after_finished = true/false may be a good idea.
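The proposal above could take a shape like the following. Note that `recreate_after_finished` is purely hypothetical, one of the names suggested in this thread; no such attribute exists in the provider today.

```hcl
# Illustrative only: "recreate_after_finished" is a proposed attribute,
# not part of the kubernetes_job schema.
resource "kubernetes_job" "migrations" {
  metadata {
    name = "db-migrations"
  }

  spec {
    ttl_seconds_after_finished = 0
    # ... template omitted ...
  }

  # Proposed opt-in flag: when true, a Job that finished and was deleted
  # by its TTL is planned for recreation on the next apply; when false,
  # its disappearance is ignored (the behavior #2596 introduced).
  recreate_after_finished = true
}
```

A separate flag like this would let both camps keep their expected behavior without overloading the meaning of a TTL of 0.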
Reverts #2596
Per issue #2649, please revert this breaking change which regresses a critical feature.