
Conversation

@butler54
Collaborator

  • feat: allow the use of self-signed certificates with trustee
  • chore: ansible linting
  • chore: ansible docs
  • fix: linting
  • feat: add multicluster support
  • fix: update global pattern
  • fix: add cert manager operator back in
  • fix: correct hub-to-spoke

@butler54 butler54 changed the title generalize secrets feat: multicluster support Sep 17, 2025
@butler54
Collaborator Author

#55 needs to be merged first, then this needs to be rebased.

@sabre1041
Collaborator

@butler54 Deployed the pattern. Some comments based on my deployment:

  • Two clusters (hub and spoke) deployed successfully
  • Spoke is very vanilla, without any content deployed
  • ACM deployed to the hub, but the spoke was not added as a managed cluster
  • Hub has two Argo instances deployed; spoke has no Argo instances

@butler54
Collaborator Author

butler54 commented Oct 7, 2025

> @butler54 Deployed the pattern. Some comments based on my deployment:
>
>   • Two clusters (hub and spoke) deployed successfully
>   • Spoke is very vanilla, without any content deployed
>   • ACM deployed to the hub, but the spoke was not added as a managed cluster
>   • Hub has two Argo instances deployed; spoke has no Argo instances

Okay, so this is my fault. Looks like we have two paths:

  1. Update the README (required anyway)
  2. Update the wrapper-multicluster.sh script to onboard the spoke cluster to the hub.

I'll take that on as it's a requirement.
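For reference, onboarding a spoke to the hub with ACM generally means registering it as a ManagedCluster on the hub. A minimal sketch of what the wrapper could apply; the cluster name "coco-spoke" is an assumption based on the namespace mentioned later in this thread, not something confirmed by this PR:

```yaml
# Hypothetical ManagedCluster registration for the spoke.
# "coco-spoke" is an assumed name; adjust to the pattern's naming.
apiVersion: cluster.open-cluster-management.io/v1
kind: ManagedCluster
metadata:
  name: coco-spoke
  labels:
    cloud: auto-detect
    vendor: auto-detect
spec:
  hubAcceptsClient: true
```

The wrapper would apply this on the hub and then complete the import with the spoke's credentials (for example via clusteradm or an auto-import secret in the cluster namespace); the exact mechanism is left to the script.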

Collaborator

@beraldoleal beraldoleal left a comment


Hi @butler54, I'm finally here.

Heads up: I tend to review commit-by-commit (old upstream habits die hard), so the large initial commit followed by chore commits was a bit annoying to navigate. Please take it as a nit and feel free to ignore it for this PR. Just thinking ahead to the helm repo split, where smaller commits will help as we bring in more contributors.

Signed-off-by: Chris Butler <chris.butler@redhat.com>
@butler54
Collaborator Author

Should be looking good now :)

@butler54 butler54 requested a review from beraldoleal October 23, 2025 13:01
@sabre1041
Collaborator

@butler54 Running into challenges with multiple runs

```
INFO Waiting up to 20m0s (until 3:20AM CDT) for the Kubernetes API at https://api.coco-hub.dkwdc.azure.redhatworkshops.io:6443...
ERROR Attempted to gather ClusterOperator status after installation failure: listing ClusterOperator objects: Get "https://api.coco-hub.dkwdc.azure.redhatworkshops.io:6443/apis/config.openshift.io/v1/clusteroperators": dial tcp: lookup api.coco-hub.dkwdc.azure.redhatworkshops.io: no such host
ERROR Bootstrap failed to complete: Get "https://api.coco-hub.dkwdc.azure.redhatworkshops.io:6443/version": dial tcp: lookup api.coco-hub.dkwdc.azure.redhatworkshops.io: no such host
ERROR Failed waiting for Kubernetes API. This error usually happens when there is a problem on the bootstrap host that prevents creating a temporary control plane.
```
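The "no such host" failures suggest the installer begins its 20-minute wait before the api DNS record resolves. A hypothetical pre-flight check for the wrapper; the host name is taken from the log above, and the function name is an illustration, not part of the existing script:

```shell
#!/usr/bin/env bash
# Hypothetical pre-flight DNS check, failing fast instead of burying the
# real error inside the installer's timeout output.

resolve_host() {
  # Returns 0 if the name resolves via the system resolver.
  getent hosts "$1" >/dev/null 2>&1
}

API_HOST="api.coco-hub.dkwdc.azure.redhatworkshops.io"  # taken from the log above
if ! resolve_host "$API_HOST"; then
  echo "ERROR: ${API_HOST} does not resolve; check DNS zone delegation before retrying" >&2
fi
```

In the wrapper this could run before invoking the installer, turning a 20-minute timeout into an immediate, readable failure.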

@sabre1041
Collaborator

Two clusters were provisioned, but this note is presented before exiting

```
---------------------
Verifying ACM deployment on hub cluster
---------------------
WARNING: ACM namespace 'open-cluster-management' not found
```
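One way to make this a hard failure rather than a buried warning, sketched with the lookup command injected as arguments so the logic can be exercised without a cluster. The function name and invocation are hypothetical, not taken from the pattern's script:

```shell
#!/usr/bin/env bash
# Hypothetical fail-fast ACM verification. In real use the lookup would be
# "kubectl get namespace open-cluster-management"; passing the command in
# as arguments lets the logic be tested without a cluster.

verify_acm_namespace() {
  if ! "$@" >/dev/null 2>&1; then
    echo "ERROR: ACM namespace 'open-cluster-management' not found" >&2
    return 1
  fi
  echo "ACM namespace present"
}

# Real invocation (assumption), aborting the wrapper on failure:
# verify_acm_namespace kubectl get namespace open-cluster-management || exit 1
```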

@butler54
Collaborator Author

butler54 commented Nov 6, 2025

Let me take another crack at this. I think we might need to bring Mak in if we are getting long-term failures in deployment, as it's affecting us anyway.

@makentenza

@butler54 let me give it a try and see if I can help figure out what's going on

@butler54
Collaborator Author

butler54 commented Nov 9, 2025

> @butler54 let me give it a try and see if I can help figure out what's going on

Ran into a similar error again. Not sure what causes it.

Collaborator

@sabre1041 sabre1041 left a comment


@butler54 Finally got a cluster deployed. An error was being reported, but it was not halting execution, and the message was buried in the output.

I added a few comments where we might want to add some error checking for easier visibility of failures.

```shell
ADDON_WAIT=0
while [ $ADDON_WAIT -lt 180 ]; do
  ADDONS_READY=$(kubectl get managedclusteraddons -n coco-spoke -o jsonpath='{range .items[?(@.spec.installNamespace=="open-cluster-management-agent-addon")]}{.metadata.name}={.status.conditions[?(@.type=="Available")].status}{"\n"}{end}' 2>/dev/null | grep -c "=True" || echo "0")
  if [ "$ADDONS_READY" -ge 4 ]; then
```
Collaborator


An error was emitted by the script here:

```
./rhdp/wrapper-multicluster.sh: line 351: [: 0
0: integer expression expected
Addon status: 0
```

Processing still continued.
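The "integer expression expected" failure comes from `grep -c "=True" || echo "0"`: when nothing matches, `grep -c` still prints `0` but exits non-zero, so the `echo` appends a second line and the multi-line value breaks the numeric `[ "$ADDONS_READY" -ge 4 ]` test. A sketch of a fix, with the counting factored into a function so it can be checked without a cluster (the function name is hypothetical):

```shell
#!/usr/bin/env bash
# Hypothetical fix for the addon-wait loop: grep -c always prints a count
# ("0" included), so the "|| echo 0" fallback can only add a duplicate line.
# Dropping it leaves a single clean integer.

count_ready_addons() {
  # $1: newline-separated "name=status" lines, as produced by the kubectl
  # jsonpath query in the snippet above.
  printf '%s' "$1" | grep -c "=True"
}
```

In the loop, `ADDONS_READY=$(count_ready_addons "$ADDON_STATUS")` then stays a single integer and the `-ge 4` comparison behaves.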

Collaborator

@beraldoleal beraldoleal left a comment


Apart from the other comments, LGTM.

@beraldoleal
Collaborator

beraldoleal commented Nov 24, 2025

@butler54 do you plan to address those comments in this PR or in a follow-up PR? Could we merge it as it is?

Collaborator

@beraldoleal beraldoleal left a comment


@butler54, I found a minor issue with the ACM channel.

@butler54 butler54 merged commit f8b6b25 into validatedpatterns:main Nov 26, 2025
4 checks passed
@butler54 butler54 deleted the generalize-secrets branch November 26, 2025 01:00