Add support for Preservation of Machines and Backing nodes #1059
Conversation
@thiyyakat You need to rebase this pull request with the latest master branch. Please check.
Force-pushed from 06ecf58 to 89f2900.
Questions that remain unanswered:
Note: A review meeting was held today for this PR. The comments below were given during the meeting.
During the meeting, we revisited the decision to perform the drain in the Failed state for a preserved machine. The reason discussed previously was that it does not make sense semantically to move the machine to Terminating and then do the drain, because there is a possibility that the machine may recover. Since Terminating is a final state, the drain (separate from the drain in triggerDeletionFlow) will be performed in the Failed phase. No change was proposed during the meeting; this design decision was only reconfirmed.
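For illustration only, here is a minimal sketch of the transition check this decision implies. The function name, phase strings, and `preserved` flag are stand-ins, not the PR's actual `updateMachine()` code:

```go
package main

import "fmt"

const (
	phaseRunning = "Running"
	phaseFailed  = "Failed"
)

// shouldDrainPreserved returns true only on the transition into Failed for a
// preserved machine. Terminating is a final state, so the drain runs while the
// machine is still Failed and may yet recover.
func shouldDrainPreserved(oldPhase, newPhase string, preserved bool) bool {
	return preserved && newPhase == phaseFailed && oldPhase != phaseFailed
}

func main() {
	fmt.Println(shouldDrainPreserved(phaseRunning, phaseFailed, true)) // true: initiate drain in Failed phase
	fmt.Println(shouldDrainPreserved(phaseFailed, phaseFailed, true))  // false: no repeated drain
}
```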
takoverflow left a comment:
Have only gone through half of the PR, have some suggestions PTAL.
```go
err := nodeops.AddOrUpdateConditionsOnNode(ctx, c.targetCoreClient, nodeName, preservedCondition)
if err != nil {
	return err
}
// Step 2: remove CA's scale-down disabled annotations to allow CA to scale down node if needed
CAAnnotations := make(map[string]string)
CAAnnotations[autoscaler.ClusterAutoscalerScaleDownDisabledAnnotationKey] = ""
latestNode, err := c.targetCoreClient.CoreV1().Nodes().Get(ctx, nodeName, metav1.GetOptions{})
if err != nil {
	klog.Errorf("error trying to get backing node %q for machine %s. Retrying, error: %v", nodeName, machine.Name, err)
	return err
}
latestNodeCopy := latestNode.DeepCopy()
latestNodeCopy, _, _ = annotations.RemoveAnnotation(latestNodeCopy, CAAnnotations) // error can be ignored, always returns nil
_, err = c.targetCoreClient.CoreV1().Nodes().Update(ctx, latestNodeCopy, metav1.UpdateOptions{})
if err != nil {
	klog.Errorf("Node UPDATE failed for node %q of machine %q. Retrying, error: %s", nodeName, machine.Name, err)
	return err
}
```
Is there a reason why there are two Get and Update calls made for the node? Can these not be combined into a single atomic node object update?
And I know this is not part of your PR, but can we update this RemoveAnnotation function? It's needlessly complicated. All you have to do after fetching the object and checking that its annotations are non-nil is:

```go
delete(obj.Annotations, annotationKey)
```

Creating a dummy annotation map, passing it in, and then creating a new map which doesn't have the key: all of this complication can be avoided.
By 2 Get() calls, are you referring to the call within AddOrUpdateConditionsOnNode and the following Get() here?

```go
latestNode, err := c.targetCoreClient.CoreV1().Nodes().Get(ctx, nodeName, metav1.GetOptions{})
```

The first one can be avoided if we didn't use the function. The second one is required because step 1 adds conditions to the node object, and the function does not return the updated node object. Fetching from the cache doesn't guarantee an up-to-date node object (tested this out empirically). I could potentially avoid fetching the objects if I didn't use the function. Will test it out.
The two Update calls cannot be combined since step 1 requires an UpdateStatus() call, while step 2 updates the Spec and requires an Update() call.
I will update the RemoveAnnotation function as recommended by you.
Edit: The RemoveAnnotation function returns a boolean indicating whether or not an update is needed, and that value is used by the function's other callers, so the function itself cannot be changed. Since the boolean is not required here, I will use your suggestion instead of calling the function.
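For reference, a minimal sketch of the agreed simplification, written in the same context as the snippet above (variable names are illustrative; this is not the final code in the PR):

```go
// Sketch: drop RemoveAnnotation and delete the CA scale-down-disabled
// annotation directly, keeping a single Get/Update round trip for the spec.
latestNode, err := c.targetCoreClient.CoreV1().Nodes().Get(ctx, nodeName, metav1.GetOptions{})
if err != nil {
	return err
}
nodeCopy := latestNode.DeepCopy()
if nodeCopy.Annotations != nil {
	delete(nodeCopy.Annotations, autoscaler.ClusterAutoscalerScaleDownDisabledAnnotationKey)
}
if _, err := c.targetCoreClient.CoreV1().Nodes().Update(ctx, nodeCopy, metav1.UpdateOptions{}); err != nil {
	return err
}
```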
Force-pushed from 22c646e to 7c062b5.
Merge conflicts resolved in:
- pkg/util/provider/machinecontroller/machine.go
- pkg/util/provider/machinecontroller/machine_util.go
* Remove use of machineStatusUpdate in machine preservation code since it uses a similarity check
* Introduce a check for phase change in updateMachine() to initiate drain of a preserved machine on failure; this check applies only to preserved machines
takoverflow left a comment:
Just some minor comments, will take a proper look once the annotation logic's revised.
* Remove auto preservation logic from manageReplicas()
* Rename constants
* Simplify preserved Running machine switch from preserve=now to preserve=when-failed
* Update tests
[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by:
The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing
Force-pushed from 4c9f267 to 0c9890f.
takoverflow left a comment:
Still going through the PR, will be adding more comments later.
…e accessing maps.
gagan16k left a comment:
Have a few comments, PTAL
- Modify sort function to de-prioritize preserved machines (see the sketch below)
- Add a test for the same
- Improve logging
- Fix bug in stopMachinePreservationIfPreserved when node is not found
- Update default MachinePreserveTimeout to 3 days as per the doc
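A hedged illustration of what such a de-prioritizing sort could look like; the `machine` struct and function name here are stand-ins, not the PR's actual types:

```go
package main

import (
	"fmt"
	"sort"
)

// Stand-in type; the real controller sorts the MCM's machine objects.
type machine struct {
	name      string
	preserved bool
}

// sortForScaleDown orders non-preserved machines first so that preserved
// machines become the last candidates picked for deletion during scale-down.
func sortForScaleDown(machines []machine) {
	sort.SliceStable(machines, func(i, j int) bool {
		return !machines[i].preserved && machines[j].preserved
	})
}

func main() {
	ms := []machine{{"m1", true}, {"m2", false}, {"m3", false}}
	sortForScaleDown(ms)
	fmt.Println(ms) // [{m2 false} {m3 false} {m1 true}]
}
```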
- Reuse function to write annotation on machine
- Minor refactoring
Manual testing was carried out with MCM-P-virtual: annotating the node and machine objects with "preserve=now", "preserve=false", and "preserve=when-failed", and checking the resulting machine object in each case (detailed outputs were collapsed in the original comment).
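To make the test setup concrete, here is a hedged client-go sketch of applying such an annotation. The bare key "preserve", the node name, and the kubeconfig handling are assumptions for illustration; the real annotation key is defined in the proposal linked from this PR, and in practice one would typically just use kubectl annotate:

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from the default kubeconfig location (illustrative).
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Fetch the node, set the (hypothetical) preserve annotation, and update.
	node, err := client.CoreV1().Nodes().Get(context.TODO(), "my-node", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	nodeCopy := node.DeepCopy()
	if nodeCopy.Annotations == nil {
		nodeCopy.Annotations = map[string]string{}
	}
	nodeCopy.Annotations["preserve"] = "now"
	if _, err := client.CoreV1().Nodes().Update(context.TODO(), nodeCopy, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
	fmt.Printf("annotated node %s with preserve=now\n", nodeCopy.Name)
}
```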
- Make changes to add auto-preserve-stopped on recovered, previously auto-preserved failed machines
- Change stopMachinePreservationIfPreserved to remove the CA annotation when preserve=false on a recovered, previously failed, preserved machine
What this PR does / why we need it:
This PR introduces a feature that allows operators and end users to preserve a machine/node and its backing VM for diagnostic purposes.
The expected behaviour, use cases, and usage are detailed in the proposal, which can be found here.
Which issue(s) this PR fixes:
Fixes #1008
Special notes for your reviewer:
The following tests were carried out serially with the machine-controller-manager-provider-virtual: #1059 (comment)
Please also take a look at the questions asked here.
Release note: