
Conversation

@cybertron
Member

Previously the SIGTERM sent when kubelet shuts down the pod was being ignored, so keepalived was SIGKILLed and never sent the priority 0 message that lets another node take over the VIP immediately. This caused ~5 seconds of delay when keepalived and haproxy were restarted during upgrades, and if it happened on a node where the local apiserver was also unavailable it caused a temporary API outage.

There appear to have been two reasons for this:

1) The socat call blocks SIGTERM handling in the bash script
2) Sending SIGTERM to keepalived without waiting for it to complete can cause the container to exit before priority 0 is sent

Using the "wait" command seems to make this work as expected. Now socat is started in the background and wait is used to keep the script from exiting. A delay is also added to allow keepalived time to shut down cleanly.

Timeouts are not needed for any of these calls because kubelet will already send SIGKILL after the grace period expires.
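
For illustration, here is a minimal sketch of the pattern described above. It is not the actual static-pod script; the socat arguments and the trap details are assumptions for the example.

```bash
#!/bin/bash
# Hypothetical sketch of the shutdown handling described above.

shutdown() {
    if pid=$(pgrep -o keepalived); then
        kill -s SIGTERM "$pid"
        # Give keepalived time to send the priority 0 advertisement;
        # kubelet will SIGKILL us anyway if this exceeds the grace period.
        while pgrep -o keepalived >/dev/null; do sleep 1; done
    fi
    exit 0
}
trap shutdown SIGTERM

# Start socat in the background: a foreground socat would keep bash from
# running the SIGTERM trap until socat itself exited.
socat TCP-LISTEN:1936,fork,reuseaddr EXEC:/bin/true &  # illustrative args

# "wait" blocks but is interruptible by the trapped signal, so SIGTERM
# from kubelet is handled immediately instead of being ignored.
wait $!
```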

- What I did

- How to verify it

- Description for the changelog

Previously the SIGTERM sent when kubelet shuts down the pod was
being ignored, so keepalived was SIGKILLed and never sent the
priority 0 message that lets another node take over the VIP
immediately. This caused ~5 seconds of delay when keepalived and
haproxy were restarted during upgrades, and if it happened on a
node where the local apiserver was also unavailable it caused a
temporary API outage.

There appear to have been two reasons for this:
1) The socat call blocks SIGTERM handling in the bash script
2) Sending SIGTERM to keepalived without waiting for it to complete
   can cause the container to exit before priority 0 is sent

Using the "wait" command seems to make this work as expected. Now
socat is started in the background and wait is used to keep the
script from exiting. A delay is also added to allow keepalived time
to shut down cleanly.

Timeouts are not needed for any of these calls because kubelet will
already send SIGKILL after the grace period expires.
openshift-ci-robot added the jira/valid-reference and jira/invalid-bug labels Nov 10, 2025
@openshift-ci-robot
Contributor

@cybertron: This pull request references Jira Issue OCPBUGS-59925, which is invalid:

  • expected the bug to target the "4.21.0" version, but no target version was set

Comment /jira refresh to re-evaluate validity if changes to the Jira bug are made, or edit the title of this pull request to link to a different bug.

The bug has been updated to refer to the pull request using the external bug tracker.


In response to this:

Previously the SIGTERM sent when kubelet shuts down the pod was being ignored, so keepalived was SIGKILLed and never sent the priority 0 message that lets another node take over the VIP immediately. This caused ~5 seconds of delay when keepalived and haproxy were restarted during upgrades, and if it happened on a node where the local apiserver was also unavailable it caused a temporary API outage.

There appear to have been two reasons for this:

1) The socat call blocks SIGTERM handling in the bash script
2) Sending SIGTERM to keepalived without waiting for it to complete can cause the container to exit before priority 0 is sent

Using the "wait" command seems to make this work as expected. Now socat is started in the background and wait is used to keep the script from exiting. A delay is also added to allow keepalived time to shut down cleanly.

Timeouts are not needed for any of these calls because kubelet will already send SIGKILL after the grace period expires.

- What I did

- How to verify it

- Description for the changelog

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.

@cybertron
Member Author

/jira refresh

openshift-ci-robot added the jira/valid-bug label and removed the jira/invalid-bug label Nov 10, 2025
@openshift-ci-robot
Contributor

@cybertron: This pull request references Jira Issue OCPBUGS-59925, which is valid. The bug has been moved to the POST state.

3 validations were run on this bug:
  • bug is open, matching expected state (open)
  • bug target version (4.21.0) matches configured target version for branch (4.21.0)
  • bug is in the state New, which is one of the valid states (NEW, ASSIGNED, POST)

In response to this:

/jira refresh

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.

openshift-ci bot requested review from emy and gryf November 10, 2025 17:43
@cybertron
Member Author

/test e2e-metal-ipi-ovn-dualstack

@cybertron
Member Author

/test e2e-metal-ipi-ovn-dualstack

Failed on ovnk; probably not related, but I'd still like to see it green.

if pid=$(pgrep -o keepalived); then
    kill -s SIGTERM "$pid"
    # Give keepalived time to shut down
    while pgrep -o keepalived; do sleep 1; done
fi
@rbbratta Nov 17, 2025
Contributor

killall -o -w -s SIGTERM keepalived maybe?

  -o,--older-than     kill processes older than TIME
  -w,--wait           wait for processes to die

or do we need the extra sleeps?

Contributor

Never mind, killall doesn't seem to match pgrep -o. pkill could, though.
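
A hypothetical pkill equivalent of the pgrep -o selection (note that pkill, unlike killall -w, has no built-in wait, so polling would still be needed):

```bash
# --oldest selects the same process pgrep -o would return
pkill --oldest --signal SIGTERM keepalived
# pkill returns immediately; wait for the processes to actually exit
while pgrep keepalived >/dev/null; do sleep 1; done
```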

Contributor

Or could we do wait $pid, since it's a child process?

Contributor

Ah, we don't want to be fast, we want to sleep.

Member Author

Talked about this with Ross offline, but documenting here for future reference:

The reason I'm looping on pgrep instead of using wait is that sometimes the pid we get back from keepalived isn't a child of the main script and wait will fail. I suspect that may be a bug in itself - we use -o to get the oldest pid, which would presumably be the parent keepalived process that was started by the main script, but it seems that isn't always true. It may be that the oldest pid somehow isn't always the main keepalived process.

In any case, another advantage of using pgrep is we will wait until all of the keepalived processes have exited so we should know for certain that priority 0 was sent by the time that completes. It's a bit inelegant, but it seems to be the safest way to handle this.
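
For reference, a small illustration of the wait limitation mentioned above (a hypothetical shell session, assuming keepalived is running):

```bash
# wait(1) only works on children of the calling shell:
sleep 30 &
wait $!              # OK: sleep was started by this shell

pid=$(pgrep -o keepalived)
wait "$pid"          # fails ("pid ... is not a child of this shell")
                     # whenever keepalived was not started by this script,
                     # hence the pgrep polling loop instead
```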

@rbbratta
Contributor

/lgtm

openshift-ci bot added the lgtm label Nov 17, 2025
@rbbratta
Contributor

/verified by @rbbratta

openshift-ci-robot added the verified label Nov 19, 2025
@openshift-ci-robot
Contributor

@rbbratta: This PR has been marked as verified by @rbbratta.


In response to this:

/verified by @rbbratta

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.

@cybertron
Member Author

/assign @umohnani8

@umohnani8
Contributor

/approved

@umohnani8
Contributor

/approve

@openshift-ci
Contributor

openshift-ci bot commented Dec 2, 2025

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: cybertron, rbbratta, umohnani8

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

openshift-ci bot added the approved label Dec 2, 2025
@openshift-ci-robot
Contributor

/retest-required

Remaining retests: 0 against base HEAD 4a64f2f and 2 for PR HEAD ea2fdc3 in total

@openshift-ci-robot
Contributor

/retest-required

Remaining retests: 0 against base HEAD 8e6beb0 and 1 for PR HEAD ea2fdc3 in total

@cybertron
Member Author

/retest-required

@openshift-ci
Contributor

openshift-ci bot commented Dec 4, 2025

@cybertron: The following tests failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:

Test name                     Commit   Details  Required  Rerun command
ci/prow/bootstrap-unit        ea2fdc3  link     false     /test bootstrap-unit
ci/prow/okd-scos-e2e-aws-ovn  ea2fdc3  link     false     /test okd-scos-e2e-aws-ovn
ci/prow/e2e-openstack         ea2fdc3  link     false     /test e2e-openstack

Full PR test history. Your PR dashboard.


Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.

@cybertron
Member Author

/label acknowledge-critical-fixes-only
/retest-required

This is a fix to improve test stability, so I think it's valid to go in right now.

openshift-ci bot added the acknowledge-critical-fixes-only label Dec 4, 2025
openshift-merge-bot merged commit e206063 into openshift:main Dec 4, 2025
14 of 17 checks passed
@openshift-ci-robot
Contributor

@cybertron: Jira Issue Verification Checks: Jira Issue OCPBUGS-59925
✔️ This pull request was pre-merge verified.
✔️ All associated pull requests have merged.
✔️ All associated, merged pull requests were pre-merge verified.

Jira Issue OCPBUGS-59925 has been moved to the MODIFIED state and will move to the VERIFIED state when the change is available in an accepted nightly payload. 🕓


In response to this:

Previously the SIGTERM sent when kubelet shuts down the pod was being ignored, so keepalived was SIGKILLed and never sent the priority 0 message that lets another node take over the VIP immediately. This caused ~5 seconds of delay when keepalived and haproxy were restarted during upgrades, and if it happened on a node where the local apiserver was also unavailable it caused a temporary API outage.

There appear to have been two reasons for this:

1) The socat call blocks SIGTERM handling in the bash script
2) Sending SIGTERM to keepalived without waiting for it to complete can cause the container to exit before priority 0 is sent

Using the "wait" command seems to make this work as expected. Now socat is started in the background and wait is used to keep the script from exiting. A delay is also added to allow keepalived time to shut down cleanly.

Timeouts are not needed for any of these calls because kubelet will already send SIGKILL after the grace period expires.

- What I did

- How to verify it

- Description for the changelog

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.

@openshift-merge-robot
Contributor

Fix included in accepted release 4.21.0-0.nightly-2025-12-08-112148

@cybertron
Member Author

/cherry-pick release-4.20

Spot-checking some CI results, it looks like this has had the intended effect. Now, instead of 6 seconds of disruption, we get 1 second (likely just a blip when the VIP fails over, which we can't reasonably fix with a cluster-hosted LB).

@openshift-cherrypick-robot

@cybertron: new pull request created: #5507


In response to this:

/cherry-pick release-4.20

Spot-checking some CI results, it looks like this has had the intended effect. Now, instead of 6 seconds of disruption, we get 1 second (likely just a blip when the VIP fails over, which we can't reasonably fix with a cluster-hosted LB).

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
