Changes from all commits (20 commits)
bf44d47
add dedicated service account to crb, cvo and version pod
ehearne-redhat Nov 28, 2025
29bce17
add new line to keep linter happy
ehearne-redhat Nov 28, 2025
2982ece
add sa to testrendermanifest
ehearne-redhat Nov 28, 2025
ca25977
add dedicated sa for update payload
ehearne-redhat Nov 28, 2025
825ac61
add cluster-admin clusterrole
ehearne-redhat Nov 29, 2025
446f19b
remove cluster-admin role from file
ehearne-redhat Nov 30, 2025
bb1602f
add new cluster role with watch feature gate
ehearne-redhat Dec 1, 2025
d7e4cd0
re-add cluster-admin
ehearne-redhat Dec 1, 2025
5641681
rename roles to ensure service account is added first
ehearne-redhat Dec 3, 2025
5fe9198
rename cvo-dedicated-sa to cluster-version-operator
ehearne-redhat Dec 3, 2025
6310c45
add default sa crb back to test into and out of change test failures
ehearne-redhat Dec 4, 2025
b39df2b
add back featuregate read role + remove default crb
ehearne-redhat Dec 4, 2025
7d69c50
add new reader permissions
ehearne-redhat Dec 5, 2025
0c875cc
add leases role and role binding
ehearne-redhat Dec 5, 2025
7b05705
move roles back to 02 but ZZ to ensure applied last step of 02
ehearne-redhat Dec 11, 2025
31c2b46
add annotations to ensure inclusion
ehearne-redhat Dec 12, 2025
9002190
add scc privilege to cluster-version-operator service account
ehearne-redhat Dec 12, 2025
38530cd
add missing annotations
ehearne-redhat Dec 15, 2025
a575e1c
allow cluster-version-operator service account to use hostaccess scc
ehearne-redhat Dec 16, 2025
fc55fa5
simplify role bindings to resolve scc test failure
ehearne-redhat Dec 17, 2025
44 changes: 44 additions & 0 deletions install/0000_00_cluster-version-operator_02_ZZ_roles.yaml
@@ -0,0 +1,44 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: cluster-version-operator
  annotations:
    include.release.openshift.io/self-managed-high-availability: "true"
subjects:
- kind: ServiceAccount
  name: cluster-version-operator
  namespace: openshift-cluster-version
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: cluster-version-operator-payload
  annotations:
    include.release.openshift.io/self-managed-high-availability: "true"
subjects:
- kind: ServiceAccount
  name: update-payload-dedicated-sa
  namespace: openshift-cluster-version
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: cluster-version-operator
  annotations:
    kubernetes.io/description: Grant the cluster-version operator permission to perform cluster-admin actions while managing the OpenShift core.
    include.release.openshift.io/self-managed-high-availability: "true"
roleRef:
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  namespace: openshift-cluster-version
  name: default
14 changes: 0 additions & 14 deletions install/0000_00_cluster-version-operator_02_roles.yaml

This file was deleted.

17 changes: 17 additions & 0 deletions install/0000_00_cluster-version-operator_02_service_account.yaml
@@ -0,0 +1,17 @@
apiVersion: v1
kind: ServiceAccount
metadata:
  name: cluster-version-operator
  namespace: openshift-cluster-version
  annotations:
    kubernetes.io/description: Dedicated Service Account for the Cluster Version Operator.
    include.release.openshift.io/self-managed-high-availability: "true"
Member:

I haven't looked into HyperShift, but I expect we'll need a ServiceAccount in the hosted-Kube-API there too?

Author:

I can look further into this and get back to you. :)

Author:

If I can't get my changes to work it might not be a bad idea to replicate their own setup. Thanks for pointing me in the right direction! :)

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: update-payload-dedicated-sa
  namespace: openshift-cluster-version
  annotations:
    kubernetes.io/description: Dedicated Service Account for the Update Payload.
    include.release.openshift.io/self-managed-high-availability: "true"
@@ -23,6 +23,7 @@ spec:
        k8s-app: cluster-version-operator
    spec:
      automountServiceAccountToken: false
      serviceAccountName: cluster-version-operator
      containers:
      - name: cluster-version-operator
        image: '{{.ReleaseImage}}'
1 change: 1 addition & 0 deletions pkg/cvo/updatepayload.go
@@ -232,6 +232,7 @@ func (r *payloadRetriever) fetchUpdatePayloadToDir(ctx context.Context, dir stri
			},
		},
		Spec: corev1.PodSpec{
			ServiceAccountName: "update-payload-dedicated-sa",
Member:
Default service account should not be used on OpenShift components

I think "Don't attach privileges to the default service account in OpenShift namespaces" makes a lot of sense, but I'm less clear on the downsides of Pods using the default service account if that account comes with no privileges. This version Pod does not need any Kube API privileges. In fact, it doesn't need any network communication at all, it's just shoveling bits around between the local container filesystem and a volume-mounted host directory. Can we leave it running in the default service account, or is there a way to request no-service-account-at-all? Maybe that's effectively what automountServiceAccountToken: false does?

Member:
Or maybe this is what you're talking about in this thread, and some layer is using the system:openshift:scc:privileged ClusterRole to decide if a Pod is allowed to have the privileged Security Context Constraint? Not clear to me how that would be enforced though. Are there docs on the system:openshift:scc:* ClusterRoles walking through this?
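For context, the system:openshift:scc:* ClusterRoles are thin RBAC wrappers that grant the "use" verb on a single SCC. Roughly what system:openshift:scc:privileged contains (reconstructed from the OpenShift SCC documentation, not taken from this PR):

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: system:openshift:scc:privileged
rules:
- apiGroups: ["security.openshift.io"]
  resources: ["securitycontextconstraints"]
  resourceNames: ["privileged"]  # the single SCC this role grants access to
  verbs: ["use"]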

Member:
Poking around in docs turned up these sections dancing in this space, but I'm still not clear if system:openshift:scc:* ClusterRole access is checked on something in the Pod-creation path, or checked against the ServiceAccount that is about to be bound to the Pod being created. I also turned up a directory with release-image manifests for many of these ClusterRoles, but no README.md or other docs there explaining who is expected to use them how, or exactly how the guard that enforces them works.
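On the enforcement question: as I understand SCC admission, the plugin resolves the SCCs usable by the union of the requesting user and the pod's service account, so for controller-created pods the binding on the service account is what matters. A sketch of such a binding, using the SA added in this PR (the binding name is hypothetical):

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: cvo-scc-privileged  # hypothetical name
  namespace: openshift-cluster-version
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:openshift:scc:privileged
subjects:
- kind: ServiceAccount
  name: cluster-version-operator
  namespace: openshift-cluster-version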

Author:
@wking sorry for the late response - service accounts need to be attached to a pod, whether we specify one or not. If we specified no service account, the default one would be used in its place.

Thanks for letting me know about minimum permissions here. We can create a service account without specifying permissions. This way, we don't elevate permissions but we also don't allow default service account usage. I can see if this is all that is required for this deployment.

Thanks for looking into further docs on this topic. We can test without extra permissions to see whether the pod genuinely requires them.
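A sketch of the permissionless-account idea described above: a bare ServiceAccount that no Role or ClusterRole binding references carries essentially no API privileges, and token mounting can additionally be disabled on the account itself. Note that the diff above currently still binds update-payload-dedicated-sa to cluster-admin, so this is the proposed end state, not what this PR applies:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: update-payload-dedicated-sa
  namespace: openshift-cluster-version
# With no RoleBinding or ClusterRoleBinding referencing this account, it
# grants essentially no API access. Setting this field on the ServiceAccount
# itself stops new pods from mounting its token by default.
automountServiceAccountToken: false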

			ActiveDeadlineSeconds: deadline,
			InitContainers: []corev1.Container{
				setContainerDefaults(corev1.Container{
@@ -23,6 +23,7 @@ spec:
        k8s-app: cluster-version-operator
    spec:
      automountServiceAccountToken: false
      serviceAccountName: cluster-version-operator
      containers:
      - name: cluster-version-operator
        image: 'quay.io/cvo/release:latest'