
charts: add allowPrivilegeEscalation: true to containerSecurityContext to nodeplugin daemonset (backport #2993) #3065

Merged: 2 commits into release-v3.6 from mergify/bp/release-v3.6/pr-2993 on Apr 26, 2022

Conversation

mergify[bot] (Contributor) commented Apr 26, 2022

This is an automatic backport of pull request #2993 done by Mergify.

Note: this backport required a manual rebase.

When running a Kubernetes cluster with a single privileged
PodSecurityPolicy that allows everything, the nodeplugin
daemonset can fail to start. Specifically, the problem is the
defaultAllowPrivilegeEscalation: false setting in the PSP.
Containers of the nodeplugin daemonset will not start when they
have privileged: true but no explicit allowPrivilegeEscalation in
their container securityContext.

Kubernetes rejects the pod when this mismatch exists: cannot set
allowPrivilegeEscalation to false and privileged to true

Signed-off-by: Silvan Loser <[email protected]>
Signed-off-by: Silvan Loser <[email protected]>
(cherry picked from commit 06c4477)
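The fix described above amounts to setting allowPrivilegeEscalation explicitly alongside privileged in the nodeplugin daemonset's container securityContext. A minimal sketch of the resulting container spec (illustrative only; the actual chart template field layout and container names may differ):

```yaml
# Illustrative securityContext for a nodeplugin daemonset container.
# Under a PSP with defaultAllowPrivilegeEscalation: false, a privileged
# container must declare allowPrivilegeEscalation: true explicitly,
# because privileged: true implies privilege escalation and Kubernetes
# refuses a spec where the two settings contradict each other.
securityContext:
  privileged: true
  allowPrivilegeEscalation: true
```

Without the explicit allowPrivilegeEscalation: true, the PSP admission plugin mutates the field to false, producing the "cannot set allowPrivilegeEscalation to false and privileged to true" validation error quoted above.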
@mergify mergify bot added the conflicts label Apr 26, 2022
@Madhu-1 Madhu-1 force-pushed the mergify/bp/release-v3.6/pr-2993 branch from f72e1d1 to 73567ea Compare April 26, 2022 05:23
When running a Kubernetes cluster with a single privileged
PodSecurityPolicy that allows everything, the nodeplugin
daemonset can fail to start. Specifically, the problem is the
defaultAllowPrivilegeEscalation: false setting in the PSP.
Containers of the nodeplugin daemonset will not start when they
have privileged: true but no explicit allowPrivilegeEscalation in
their container securityContext.

Kubernetes rejects the pod when this mismatch exists: cannot set
allowPrivilegeEscalation to false and privileged to true

Signed-off-by: Silvan Loser <[email protected]>
Signed-off-by: Silvan Loser <[email protected]>
(cherry picked from commit f2e0fa2)
@Madhu-1 Madhu-1 force-pushed the mergify/bp/release-v3.6/pr-2993 branch from 73567ea to 10c8f2e Compare April 26, 2022 05:24
@Madhu-1 Madhu-1 requested a review from a team April 26, 2022 05:25
@Madhu-1 Madhu-1 added ci/retry/e2e Label to retry e2e retesting on approved PR's and removed conflicts labels Apr 26, 2022
ceph-csi-bot (Collaborator) commented:
/retest ci/centos/mini-e2e-helm/k8s-1.22

ceph-csi-bot (Collaborator) commented:
@mergify[bot] "ci/centos/mini-e2e-helm/k8s-1.22" test failed. Logs are available at location for debugging

ceph-csi-bot (Collaborator) commented:
@Mergifyio requeue

mergify[bot] (Contributor, Author) commented Apr 26, 2022:

requeue

❌ This pull request head commit has not been previously disembarked from queue.

@mergify mergify bot merged commit b50d859 into release-v3.6 Apr 26, 2022
@mergify mergify bot deleted the mergify/bp/release-v3.6/pr-2993 branch April 26, 2022 10:02
@Madhu-1 Madhu-1 mentioned this pull request Jun 9, 2022