Conversation

@bshephar (Contributor) commented Oct 31, 2024

This change builds the Operator bundle with ENABLE_WEBHOOKS=false. This is due to the duplication of webhook logic, since openstack-operator imports and runs this webhook at the OpenStackControlPlane level. Therefore, having the webhook run by the service operator is a needless duplication of this webhook logic.

Jira: https://issues.redhat.com/browse/OSPRH-11198
Depends-On: openstack-k8s-operators/openstack-k8s-operators-ci#114

@openshift-ci openshift-ci bot requested review from frenzyfriday and slagle October 31, 2024 12:55
@openshift-ci bot commented Oct 31, 2024

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: bshephar

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Details: Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@softwarefactory-project-zuul

Build failed (check pipeline). Post recheck (without leading slash)
to rerun all jobs. Make sure the failure cause has been resolved before
you rerun jobs.

https://softwarefactory-project.io/zuul/t/rdoproject.org/buildset/e6ee4d6e038742faa0358b058025f546

✔️ openstack-k8s-operators-content-provider SUCCESS in 2h 01m 08s
heat-operator-tempest-multinode FAILURE in 1h 11m 45s

@bshephar (Contributor, Author) commented Nov 1, 2024

Yeah, so as we discussed on the call, this will break Kuttl tests. So we need to be able to set this dynamically for CI jobs, which is ideal, since we do want to test webhooks in CI. But ultimately, the bundle we build and ship should have the service-level operator webhooks disabled so that they can be executed by the openstack-operator without duplication.

❯ curl -s "https://storage.googleapis.com/test-platform-results/pr-logs/pull/openstack-k8s-operators_heat-operator/466/pull-ci-openstack-k8s-operators-heat-operator-main-heat-operator-build-deploy-kuttl/1852209004751622144/build-log.txt" | grep "failed to call webhook"
    case.go:380: Internal error occurred: failed calling webhook "mheat.kb.io": failed to call webhook: Post "https://heat-operator-controller-manager-service.openstack-operators.svc:443/mutate-heat-openstack-org-v1beta1-heat?timeout=10s": dial tcp 10.128.0.75:9443: connect: connection refused
    case.go:380: Internal error occurred: failed calling webhook "mheat.kb.io": failed to call webhook: Post "https://heat-operator-controller-manager-service.openstack-operators.svc:443/mutate-heat-openstack-org-v1beta1-heat?timeout=10s": dial tcp 10.128.0.75:9443: connect: connection refused
    case.go:380: Internal error occurred: failed calling webhook "mheat.kb.io": failed to call webhook: Post "https://heat-operator-controller-manager-service.openstack-operators.svc:443/mutate-heat-openstack-org-v1beta1-heat?timeout=10s": dial tcp 10.128.0.75:9443: connect: connection refused
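The dynamic toggle described above amounts to a build arg with a shippable default that CI can override. A sketch of what that could look like in the operator's Dockerfile (illustrative, not the exact Dockerfile in this PR; the build logs further down show this same ARG/ENV pair):

```dockerfile
# Build-time toggle: defaults to false for the shipped bundle,
# but CI can override it, e.g. --build-arg ENABLE_WEBHOOKS=true
ARG ENABLE_WEBHOOKS=false

# Capture the build arg as a runtime env var so the manager
# binary sees it when it decides whether to register webhooks.
ENV ENABLE_WEBHOOKS="${ENABLE_WEBHOOKS}"
```

The key detail is that ARG only exists at build time; without the ENV line capturing it, the running manager would never see the value.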

@bshephar (Contributor, Author) commented Nov 1, 2024

/test heat-operator-build-deploy-kuttl

@bshephar bshephar force-pushed the disable-webhooks branch 3 times, most recently from d0d9edc to b0c5b47 Compare November 2, 2024 10:51
@softwarefactory-project-zuul

Build failed (check pipeline). Post recheck (without leading slash)
to rerun all jobs. Make sure the failure cause has been resolved before
you rerun jobs.

https://softwarefactory-project.io/zuul/t/rdoproject.org/buildset/cc47398324ea43fc8a7863f7cafc5c82

✔️ openstack-k8s-operators-content-provider SUCCESS in 1h 26m 45s
heat-operator-tempest-multinode FAILURE in 1h 09m 56s

@softwarefactory-project-zuul

Build failed (check pipeline). Post recheck (without leading slash)
to rerun all jobs. Make sure the failure cause has been resolved before
you rerun jobs.

https://softwarefactory-project.io/zuul/t/rdoproject.org/buildset/9ac36f3b6ba341e6abf8a2fc21672ca7

✔️ openstack-k8s-operators-content-provider SUCCESS in 1h 25m 40s
heat-operator-tempest-multinode FAILURE in 1h 10m 17s

@bshephar

We need to merge this one to proceed any further here:
openshift/release#58391

@bshephar

/test heat-operator-build-deploy-kuttl

@bshephar bshephar force-pushed the disable-webhooks branch 2 times, most recently from e9c55e2 to cff9550 Compare November 12, 2024 06:33
@softwarefactory-project-zuul

Build failed (check pipeline). Post recheck (without leading slash)
to rerun all jobs. Make sure the failure cause has been resolved before
you rerun jobs.

https://softwarefactory-project.io/zuul/t/rdoproject.org/buildset/589eadecd7fc49e2b4cd73a423f7c1f9

✔️ openstack-k8s-operators-content-provider SUCCESS in 1h 25m 16s
heat-operator-tempest-multinode FAILURE in 1h 09m 45s

@softwarefactory-project-zuul

Build failed (check pipeline). Post recheck (without leading slash)
to rerun all jobs. Make sure the failure cause has been resolved before
you rerun jobs.

https://softwarefactory-project.io/zuul/t/rdoproject.org/buildset/efff7273a75442b0941b043ecbe08afb

✔️ openstack-k8s-operators-content-provider SUCCESS in 1h 49m 58s
heat-operator-tempest-multinode FAILURE in 1h 33m 40s

@softwarefactory-project-zuul

Build failed (check pipeline). Post recheck (without leading slash)
to rerun all jobs. Make sure the failure cause has been resolved before
you rerun jobs.

https://softwarefactory-project.io/zuul/t/rdoproject.org/buildset/1dfbed4e2db6453e9a382f5526c704d4

✔️ openstack-k8s-operators-content-provider SUCCESS in 1h 54m 15s
heat-operator-tempest-multinode FAILURE in 1h 34m 58s

@bshephar

So, the build is enabling webhooks:

❯ curl -s "https://gcsweb-ci.apps.ci.l2s4.p1.openshiftapps.com/gcs/test-platform-results/pr-logs/pull/openstack-k8s-operators_heat-operator/466/pull-ci-openstack-k8s-operators-heat-operator-main-heat-operator-build-deploy-kuttl/1859192010674540544/artifacts/heat-operator-build-deploy-kuttl/openstack-k8s-operators-build/build-log.txt" | grep ENABLE_WEB
[2/2] STEP 10/21: ARG ENABLE_WEBHOOKS=false
[2/2] STEP 18/21: ENV ENABLE_WEBHOOKS="${ENABLE_WEBHOOKS}"
+ oc patch bc/heat-operator-bundle --type=json '-p=[{"op": "add", "path": "/spec/strategy/dockerStrategy/env", "value": [{"name": "ENABLE_WEBHOOKS", "value": "true"}]}]'
STEP 2/17: ENV "ENABLE_WEBHOOKS"="true"
+ oc patch bc/openstack-operator-bundle --type=json '-p=[{"op": "add", "path": "/spec/strategy/dockerStrategy/env", "value": [{"name": "ENABLE_WEBHOOKS", "value": "true"}]}]'
STEP 2/17: ENV "ENABLE_WEBHOOKS"="true"

But Kuttl is failing because the webhook isn't responding:

❯ curl -s "https://storage.googleapis.com/test-platform-results/pr-logs/pull/openstack-k8s-operators_heat-operator/466/pull-ci-openstack-k8s-operators-heat-operator-main-heat-operator-build-deploy-kuttl/1859192010674540544/build-log.txt" | grep -E 'vheat|mheat'
    case.go:380: Internal error occurred: failed calling webhook "mheat.kb.io": failed to call webhook: Post "https://heat-operator-controller-manager-service.openstack-operators.svc:443/mutate-heat-openstack-org-v1beta1-heat?timeout=10s": dial tcp 10.128.0.98:9443: connect: connection refused
    case.go:380: Internal error occurred: failed calling webhook "mheat.kb.io": failed to call webhook: Post "https://heat-operator-controller-manager-service.openstack-operators.svc:443/mutate-heat-openstack-org-v1beta1-heat?timeout=10s": dial tcp 10.128.0.98:9443: connect: connection refused
    case.go:380: Internal error occurred: failed calling webhook "mheat.kb.io": failed to call webhook: Post "https://heat-operator-controller-manager-service.openstack-operators.svc:443/mutate-heat-openstack-org-v1beta1-heat?timeout=10s": dial tcp 10.128.0.98:9443: connect: connection refused

I must be missing something else.

@bshephar

... Yeah, ARG doesn't work like that.

This change builds the Operator bundle with ENABLE_WEBHOOKS=false.
This is due to the duplication of webhook logic since openstack-operator
imports and runs this webhook at the OpenStackControlPlane level. Therefore,
having the webhook run by the service operator is a needless duplication
of this webhook logic.

Jira: https://issues.redhat.com/browse/OSPRH-11198
Signed-off-by: Brendan Shephard <bshephar@redhat.com>
bshephar added a commit to bshephar/openstack-k8s-operators-ci that referenced this pull request Nov 22, 2024
This change adds the environment variable required to disable
operator webhooks in the build. This will only disable webhooks in operators
that have the option to do so in their specific Dockerfile. For example:
openstack-k8s-operators/heat-operator#466

Signed-off-by: Brendan Shephard <bshephar@redhat.com>
@bshephar

Yeah, so I guess we need to disable it like this:
https://github.com/openstack-k8s-operators/openstack-k8s-operators-ci/pull/114/files

@dprince (Contributor) commented Nov 22, 2024

@bshephar So my PR here would disable the validation/mutating webhooks for operators here: openstack-k8s-operators/openstack-operator#1185

@bshephar

@bshephar So my PR here would disable the validation/mutating webhooks for operators here: openstack-k8s-operators/openstack-operator#1185

We wanted to run them from the openstack-operator though, didn't we? Then disable them in each individual service operator.

@bshephar

/retest

@bshephar (Contributor, Author) commented Dec 9, 2024

This doesn't need to be done at the individual operator level. We will change the architecture for deploying openstack-operator, and in doing so we will stop running service webhooks.

@bshephar bshephar closed this Dec 9, 2024
@bshephar bshephar deleted the disable-webhooks branch December 9, 2024 05:11