rbd: implement CSI-Addons ReclaimSpace operations #2724
Conversation
// NodeReclaimSpace runs fstrim or blkdiscard on the path where the volume is
// mounted or attached. When a volume with multi-node permissions is detected,
Just to understand this better: this applies when the RWX capability is detected, doesn't it? And it is not applicable for ROX, even though both are from the multi-node world.
read-only volumes should not get this request at all. @Rakshith-R @yati1998 is there a check (planned) for that in the k8s-controller?
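The access-mode concern discussed above can be sketched as a simple guard. This is an illustrative helper under assumed names, not the PR's actual types; the access-mode strings come from the CSI specification:

```go
package main

import "fmt"

// allowNodeReclaim is a hypothetical guard illustrating the discussion:
// RWX block-mode volumes may be written from several nodes at once, so
// discarding blocks risks data corruption; read-only (ROX) volumes should
// never receive a reclaim request in the first place.
func allowNodeReclaim(accessMode string, blockMode bool) bool {
	switch accessMode {
	case "MULTI_NODE_READER_ONLY", "SINGLE_NODE_READER_ONLY":
		// nothing to trim on a read-only volume
		return false
	case "MULTI_NODE_MULTI_WRITER":
		// RWX filesystem volumes can be trimmed per node, but RWX
		// block-mode volumes are rejected (danger of corruption)
		return !blockMode
	default:
		return true
	}
}

func main() {
	fmt.Println(allowNodeReclaim("MULTI_NODE_MULTI_WRITER", true)) // RWX block
	fmt.Println(allowNodeReclaim("SINGLE_NODE_WRITER", false))     // RWO filesystem
}
```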
// path can either be the staging path on the node, or the volume path
// inside an application container
Actually, in other words: it can be a staging path or a publish/target path, in CSI terms.
@nixpanic did you plan to adjust the path definition here too, describing it as the publish/target path instead of the path inside the application container?
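For context, the node-side operation boils down to picking one of two tools depending on how the path is consumed. A minimal sketch of that command selection, with illustrative paths, not the PR's actual code:

```go
package main

import "fmt"

// reclaimCommand picks the space-reclaim tool for a volume path: fstrim for
// a mounted filesystem (staging or publish/target path), blkdiscard for a
// raw block device. This is an illustrative helper, not ceph-csi code.
func reclaimCommand(blockMode bool, path string) []string {
	if blockMode {
		// raw block device: discard unused blocks directly
		return []string{"blkdiscard", path}
	}
	// mounted filesystem: trim free space through the mount point
	return []string{"fstrim", path}
}

func main() {
	fmt.Println(reclaimCommand(false, "/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-example/globalmount"))
	fmt.Println(reclaimCommand(true, "/dev/rbd0"))
}
```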
LGTM!
Requesting changes to avoid accidental merging; once we get confirmation that manual testing is working fine, I will approve it.
Also, please check the review comments posted a couple of days back on this PR.
please check the review comments.
Manually tested with an ugly hack. Now also with a minimal testing tool (WIP) at https://github.com/nixpanic/ceph-csi/tree/testing/csi-addons/reclaim-space/tools/csi-addons
@humblec please check my replies. Let me know if there is anything you are concerned about or do not accept. Thanks! I will update the PR once you have replied.
lgtm!
I tried to address all your questions related to the docs, and fixed the typos.
	} else {
		// append the right staging location used by this CSI-driver
		path = fmt.Sprintf("%s/%s", path, volumeID)
	}
The globalmount component was included, but it seems that is something CO-dependent. Kubernetes includes it in the NodeStageVolume CSI request, and so should any caller of NodeReclaimSpace.
I1223 12:01:43.532455 375675 utils.go:191] ID: 45 Req-ID: 0001-0011-openshift-storage-0000000000000001-96e9a7ce-63c8-11ec-89d0-0a580a83001c GRPC call: /csi.v1.Node/NodeStageVolume
I1223 12:01:43.532652 375675 utils.go:195] ID: 45 Req-ID: 0001-0011-openshift-storage-0000000000000001-96e9a7ce-63c8-11ec-89d0-0a580a83001c GRPC request: {"secrets":"***stripped***","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-720d76c5-d994-4eb8-b8c0-d2bc5f493c17/globalmount
diff --git a/internal/csi-addons/rbd/reclaimspace.go b/internal/csi-addons/rbd/reclaimspace.go
index 6898fb522..ab6c56c13 100644
--- a/internal/csi-addons/rbd/reclaimspace.go
+++ b/internal/csi-addons/rbd/reclaimspace.go
@@ -118,7 +118,7 @@ func (rsns *ReclaimSpaceNodeServer) NodeReclaimSpace(
}
} else {
// append the right staging location used by this CSI-driver
- path = fmt.Sprintf("%s/globalmount/%s", path, volumeID)
+ path = fmt.Sprintf("%s/%s", path, volumeID)
}
// do not allow RWX block-mode volumes, danger of data corruption
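The resulting path handling can be illustrated as follows. The base path mirrors what kubelet passes as staging_target_path; the IDs below are made up:

```go
package main

import "fmt"

// stagingPath appends the volume ID to the staging target path. Kubernetes
// already includes the "globalmount" component in staging_target_path, so
// the driver must not add it again. Example values are illustrative.
func stagingPath(stagingTargetPath, volumeID string) string {
	return fmt.Sprintf("%s/%s", stagingTargetPath, volumeID)
}

func main() {
	base := "/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-example/globalmount"
	fmt.Println(stagingPath(base, "csi-vol-example"))
}
```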
Signed-off-by: Niels de Vos <[email protected]>
The CSI Controller (provisioner) can call `rbd sparsify` to reduce the space consumption of the volume.
Signed-off-by: Niels de Vos <[email protected]>
By calling fstrim/blkdiscard on the volume, space consumption should get reduced.
Signed-off-by: Niels de Vos <[email protected]>
Signed-off-by: Niels de Vos <[email protected]>
Signed-off-by: Niels de Vos <[email protected]>
Signed-off-by: Niels de Vos <[email protected]>
@nixpanic I am fine with the current version; only the multi-node clarification is pending on Rakshith and Yati, but that should not be a blocker here.
This adds an initial implementation for the ControllerReclaimSpace and NodeReclaimSpace CSI-Addons operations. There is no csi-addons/kubernetes-csi-addons controller or sidecar available yet, so e2e testing will need to be added later.
Updates: #2672
Closes: #2710
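On the controller side, the description mentions `rbd sparsify`. A hedged sketch of invoking it as an external command (this excerpt does not show whether the driver shells out or uses a librbd binding; the pool and image names are made up):

```go
package main

import (
	"fmt"
	"os/exec"
)

// sparsifyCmd builds an `rbd sparsify` invocation, which reclaims zeroed
// extents of an RBD image to reduce its space consumption. Illustrative
// only; not the PR's actual controller code.
func sparsifyCmd(pool, image string) *exec.Cmd {
	return exec.Command("rbd", "sparsify", fmt.Sprintf("%s/%s", pool, image))
}

func main() {
	cmd := sparsifyCmd("replicapool", "csi-vol-example")
	fmt.Println(cmd.Args)
}
```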
These commands are normally not required, but in case of issues, leave any of the following bot commands in an otherwise empty comment in this PR:
/retest ci/centos/<job-name>: retest the <job-name> after an unrelated failure (please report the failure too!)
/retest all: run this in case the CentOS CI failed to start or report any test progress or results