
feat: Use custom SCC for topolvm-node #35

Merged
merged 1 commit into from
Dec 21, 2021

Conversation

jmolmo
Contributor

@jmolmo jmolmo commented Dec 20, 2021

Replaced the SCC with the custom topolvm-node SCC

Signed-off-by: Juan Miguel Olmo Martínez [email protected]

- 'configMap'
- 'emptyDir'
- 'secret'
- 'hostPath'
Contributor

@sp98 sp98 commented Dec 21, 2021

(Trying to understand how SCCs work.) How is this SCC associated with the topolvm-node service account? There is a `users` field that contains the list of service accounts. For example: rook

Contributor

  • thanks for finding this, I saw the `users` field somewhere and forgot where that was during our meetings
  • in the current scenario we are using a ClusterRole & ClusterRoleBinding to use this SCC
  • in the merged commit it was using `privileged` directly, and this changes that to use the SCC named `topolvm-node`

Contributor Author

@sp98: I have included the different settings after reading what they do.
As @leelavg commented, we are using a ClusterRoleBinding to associate the service account with the SCC. Including the user in the SCC seems to tie the SCC to a single namespace (because of the way you need to specify the user), which I think is too limiting for our needs.
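For anyone else trying to follow the ClusterRoleBinding approach mentioned above, it might look roughly like this. This is an illustrative sketch only; the actual resource names, namespace, and file layout in the repo may differ:

```yaml
# Hypothetical sketch: a ClusterRole grants "use" of the topolvm-node SCC,
# and a ClusterRoleBinding ties it to the topolvm-node service account.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: topolvm-node-scc          # illustrative name
rules:
  - apiGroups: ["security.openshift.io"]
    resources: ["securitycontextconstraints"]
    resourceNames: ["topolvm-node"]
    verbs: ["use"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: topolvm-node-scc          # illustrative name
subjects:
  - kind: ServiceAccount
    name: topolvm-node
    namespace: lvm-operator-system  # illustrative namespace
roleRef:
  kind: ClusterRole
  name: topolvm-node-scc
  apiGroup: rbac.authorization.k8s.io
```

Because the grant lives in a ClusterRole rather than in the SCC's `users` list, the same SCC can be reused from whatever namespace the operator is deployed into.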

Contributor

The operator only needs the permissions for the SA in the operator namespace, as that is where it will create the containers. Can we try this with a Role?

Contributor Author

@nbalacha: We cannot use a Role instead of a ClusterRole, because we need permissions for cluster-scoped resources (such as nodes).
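As a minimal illustration of why a namespaced Role cannot work here: nodes are cluster-scoped, so a rule like the following can only be expressed in a ClusterRole (the name and verb list are assumptions for the example, not the repo's actual RBAC):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: topolvm-node-reader       # illustrative name
rules:
  - apiGroups: [""]
    resources: ["nodes"]          # cluster-scoped; a namespaced Role cannot grant access to this
    verbs: ["get", "list", "watch"]
```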

Contributor

@leelavg leelavg left a comment

/lgtm

allowPrivilegeEscalation: true
allowPrivilegedContainer: true
Contributor

  • not immediate, but our final goal should be to grant only the specific capabilities the container needs and curtail everything else.
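A sketch of what that tightening could eventually look like: instead of allowing privileged containers outright, an SCC can drop everything and allow only named capabilities. The capability values below are placeholders, not a vetted minimal set for topolvm:

```yaml
# Hypothetical hardened fragment; the real minimal capability set
# would need to be determined by auditing what lvmd actually uses.
allowPrivilegedContainer: false
allowPrivilegeEscalation: false
requiredDropCapabilities:
  - ALL
allowedCapabilities:
  - SYS_ADMIN   # example only
```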

@openshift-ci
Contributor

openshift-ci bot commented Dec 21, 2021

@leelavg: changing LGTM is restricted to collaborators

In response to this:

/lgtm

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.


@leseb leseb left a comment


Generally, it's good to comment on why we require a permission, since we hope to remove it later (if ever possible).

allowHostDirVolumePlugin: true
allowHostIPC: false
allowHostNetwork: false
allowHostPID: true

Why?


That does not tell why we need HostPID.

Contributor

@leseb generally the nodeplugin doesn't need it; however, lvmd in the daemonset does.

It uses nsenter to enter the host PID namespace and run LVM operations.
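For readers unfamiliar with the pattern, a pod-spec fragment for this kind of daemonset typically combines `hostPID` with a privileged container so that nsenter can target host PID 1. This is an illustration of the mechanism only, not the actual lvmd manifest; image and command are assumptions:

```yaml
# Illustrative pod spec fragment (spec.template.spec of a DaemonSet).
spec:
  hostPID: true                    # pod sees the host's PID namespace, including PID 1
  containers:
    - name: lvmd
      image: example/lvmd:latest   # hypothetical image
      securityContext:
        privileged: true           # required for nsenter to join host namespaces
      command: ["/bin/sh", "-c"]
      args:
        # nsenter targets host PID 1 and joins its mount/uts/ipc/net
        # namespaces so lvm commands run against the host, not the container
        - nsenter -t 1 -m -u -i -n -- vgs
```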


I don't think you need HostPID for nsenter to run commands on the host. As far as I can tell, Rook already uses nsenter to exec commands on the host, and we don't use HostPID.

However, if you really feel like we do then let's comment on why. Thanks

Contributor Author

Extract from the OpenShift documentation:

| Field | Type | Description |
| -- | -- | -- |
| allowHostPID | boolean | AllowHostPID determines if the policy allows host pid in the containers. |

If you are asking why we need `hostPID: true` in the pod, this is because lvmd needs to see PID 1 on the host. (Here you can follow the complete discussion that led to this point.)

But this world of SCCs is new to me. How do you suggest adding this requirement (seeing PID 1 on the host) to the SCC?


I don't mean to rehash or argue the past decision that led to enabling HostPID. I just want to make sure everybody knows and understands why it is needed. That's all :) So in config/rbac/topolvm_node_scc.yaml I'm just suggesting adding more comments to explain why each option we enable is needed.
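Something along these lines in config/rbac/topolvm_node_scc.yaml would address the suggestion; the comment wording below is illustrative, not final:

```yaml
# topolvm-node needs broad host access because lvmd manages LVM on the node itself.
allowHostDirVolumePlugin: true   # hostPath volumes mount host directories (e.g. device paths)
allowHostIPC: false
allowHostNetwork: false
allowHostPID: true               # lvmd uses nsenter against host PID 1 to run lvm commands
```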


Replaced the <privileged> SCC with the custom topolvm-node SCC

Signed-off-by: Juan Miguel Olmo Martínez <[email protected]>
@jmolmo
Contributor Author

jmolmo commented Dec 21, 2021

I have extracted the topolvm-node SCC from the cluster role, and created a specific ClusterRole and ClusterRoleBinding for the SCC. This way the deployment can be more flexible. I guess we will also need to add PSPs in the near future :-)

@openshift-ci openshift-ci bot added the lgtm Indicates that a PR is ready to be merged. label Dec 21, 2021
@openshift-ci
Contributor

openshift-ci bot commented Dec 21, 2021

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: jmolmo, leelavg, nbalacha, sp98

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@nbalacha nbalacha merged commit 7b5eb34 into openshift:main Dec 21, 2021