
cephfs static pv, rootPath is ignored by cephfs csi #1119

Closed
md-shabbir opened this issue May 29, 2020 · 6 comments
Labels
component/cephfs Issues related to CephFS

Comments

@md-shabbir commented May 29, 2020

I followed this doc to create a static CephFS PV.
I created the following PV and PVC:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: test
spec:
  accessModes:
  - ReadWriteMany
  capacity:
    storage: 1Gi
  csi:
    driver: rook-ceph.cephfs.csi.ceph.com
    fsType: ext4
    nodeStageSecretRef:
      name: rook-csi-cephfs-node
      namespace: rook-ceph
    volumeAttributes:
      "clusterID": "rook-ceph"
      "fsName": "myfs"
      "staticVolume": "true"
      "rootPath": /volumes/testGroup/testSubVolume
    volumeHandle: cephfs-static-pv
  volumeMode: Filesystem
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  # volumeName ties this PVC to a specific PV
  volumeName: test
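
For context, a minimal Pod consuming this claim could look like the following sketch (the container name, image, and command are my own placeholders, not from the original report; the pod name, mount path, and claim name match the rest of the thread):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: test-pre-prov
spec:
  containers:
  - name: app              # placeholder container name
    image: busybox         # placeholder image
    command: ["sleep", "infinity"]
    volumeMounts:
    - name: data
      mountPath: /mnt/data # matches the mount output below
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: test      # binds to the PVC defined above
```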

Then I created a pod to consume the volume. When I listed the mounts, the expected path was not mounted:

root@test-pre-prov:/# mount -l
10.233.17.154:6789,10.233.36.190:6789,10.233.7.23:6789:/volumes/csi/csi-vol-334530b7-a0e5-11ea-856f-0a580ae94188 on /mnt/data type ceph (rw,relatime,name=csi-cephfs-node,secret=<hidden>,acl,mds_namespace=myfs)

I was expecting the path /volumes/testGroup/testSubVolume to be mounted.

Note: I am using the rook-ceph.cephfs.csi.ceph.com CSI driver, whereas the doc uses cephfs.csi.ceph.com.

Environment details

  • Image/version of Ceph CSI driver : quay.io/cephcsi/cephcsi:v1.2.2
  • Helm chart version :
  • Kernel version : 5.3.0-1020-gcp
  • Mounter used for mounting PVC (for cephfs its fuse or kernel. for rbd its
    krbd or rbd-nbd) :
  • Kubernetes cluster version : v1.15.3
  • Ceph cluster version : v1.2.7

Ceph plugin logs attached
cephplugin.log

@Madhu-1 (Collaborator) commented May 29, 2020

@agarwal-mudit can you please take a look at it?

@agarwal-mudit (Contributor) commented

Checking. @md-shabbir, I need some information about the setup:

  1. If I am not wrong, you are unable to see the mount, but the pod is running successfully with the static PVC?
  2. What is the history of the setup? The current mount shows that a dynamically provisioned volume is attached to the pod; is it the same pod where the static one was also tried?
  3. Please share the pod YAML as well.
  4. Did you try accessing the mount path in the pod?

Not sure why it is not showing; the cephfs plugin logs show that it was mounted properly:

I0529 09:58:45.801615       1 utils.go:120] ID: 517 GRPC request: {"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/test/globalmount","target_path":"/var/lib/kubelet/pods/03c9124c-5b77-4012-8ae7-58b9dc0f4add/volumes/kubernetes.io~csi/test/mount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":5}},"volume_context":{"clusterID":"rook-ceph","fsName":"myfs","rootPath":"/volumes/testGroup/testSubVolume","staticVolume":"true"},"volume_id":"cephfs-static-pv"}
I0529 09:58:45.805971       1 mount_linux.go:170] Cannot run systemd-run, assuming non-systemd OS
I0529 09:58:45.806000       1 mount_linux.go:171] systemd-run failed with: exit status 1
I0529 09:58:45.806012       1 mount_linux.go:172] systemd-run output: Failed to create bus connection: No such file or directory
I0529 09:58:45.806071       1 util.go:48] ID: 517 cephfs: EXEC mount [-o bind /var/lib/kubelet/plugins/kubernetes.io/csi/pv/test/globalmount /var/lib/kubelet/pods/03c9124c-5b77-4012-8ae7-58b9dc0f4add/volumes/kubernetes.io~csi/test/mount]
W0529 09:58:45.807836       1 nodeserver.go:230] ID: 517 mount-cache: failed to publish volume cephfs-static-pv /var/lib/kubelet/pods/03c9124c-5b77-4012-8ae7-58b9dc0f4add/volumes/kubernetes.io~csi/test/mount: mount-cache: node publish volume failed to find cache entry for volume
I0529 09:58:45.807864       1 nodeserver.go:233] ID: 517 cephfs: successfully bind-mounted volume cephfs-static-pv to /var/lib/kubelet/pods/03c9124c-5b77-4012-8ae7-58b9dc0f4add/volumes/kubernetes.io~csi/test/mount

@agarwal-mudit (Contributor) commented

It's working for me:

root@csicephfs-preprov-demo-pod:/# mount -l
ceph-fuse on /mudit type fuse.ceph-fuse (rw,nosuid,nodev,relatime,user_id=0,group_id=0,allow_other)

I have attached the PVC/PV/pod definitions; the only difference is that I am using the rook-ceph secret, but that has nothing to do with the mount failure.

And my cephfs plugin has logs similar to those I pasted earlier:

I0530 04:14:43.894807       1 util.go:48] ID: 2604 Req-ID: preprov-pv-cephfs-01 cephfs: EXEC mount [-o bind,_netdev /var/lib/kubelet/plugins/kubernetes.io/csi/pv/preprov-pv-cephfs-01/globalmount /var/lib/kubelet/pods/8684edfd-ca91-4932-ae7c-4115a032da6b/volumes/kubernetes.io~csi/preprov-pv-cephfs-01/mount]
I0530 04:14:44.048884       1 nodeserver.go:212] ID: 2604 Req-ID: preprov-pv-cephfs-01 cephfs: successfully bind-mounted volume preprov-pv-cephfs-01 to /var/lib/kubelet/pods/8684edfd-ca91-4932-ae7c-4115a032da6b/volumes/kubernetes.io~csi/preprov-pv-cephfs-01/mount

static-pv.txt

@nixpanic nixpanic added the component/cephfs Issues related to CephFS label Jun 2, 2020
@md-shabbir (Author) commented

Yes @agarwal-mudit, I was trying both dynamic and static provisioning on the same setup.

I destroyed the cluster and brought up a new one. Now I am unable to reproduce the issue.

However, I hit another issue with static PVs: now the mount is failing.
Describing the pod shows the following events:

  Type     Reason                  Age                From                     Message
  ----     ------                  ----               ----                     -------
  Warning  FailedScheduling        39s (x2 over 39s)  default-scheduler        persistentvolumeclaim "test" not found
  Warning  FailedScheduling        28s (x2 over 38s)  default-scheduler        pod has unbound immediate PersistentVolumeClaims (repeated 3 times)
  Normal   Scheduled               24s                default-scheduler        Successfully assigned default/test-pre-prov to sbr-2
  Normal   SuccessfulAttachVolume  24s                attachdetach-controller  AttachVolume.Attach succeeded for volume "test"
  Warning  FailedMount             7s (x5 over 16s)   kubelet, sbr-2           MountVolume.MountDevice failed for volume "test" : rpc error: code = Internal desc = failed to get user credentials from node stage secrets: missing ID field 'userID' in secrets

I edited the secret rook-csi-cephfs-node (this secret is referenced in the PV's nodeStageSecretRef field) as follows:
Original:

apiVersion: v1
data:
  adminID: Y3NpLWNlcGhmcy1ub2Rl
  adminKey: QVFCTHZkaGVDT25PRmhBQTQrRzdEUWwzdzB0UERaVjFrMDhBdHc9PQ==
kind: Secret

Edited:

apiVersion: v1
data:
  adminID: Y3NpLWNlcGhmcy1ub2Rl
  adminKey: QVFCTHZkaGVDT25PRmhBQTQrRzdEUWwzdzB0UERaVjFrMDhBdHc9PQ==
  userID: Y3NpLWNlcGhmcy1ub2Rl
  userKey: QVFCTHZkaGVDT25PRmhBQTQrRzdEUWwzdzB0UERaVjFrMDhBdHc9PQ==
kind: Secret

After the edit, it worked. I also tried with a new secret created with admin creds, and it worked too.
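As with any Kubernetes Secret, the data fields above are base64-encoded; the added userID/userKey values can be produced from the plaintext credentials. For example, the Y3NpLWNlcGhmcy1ub2Rl value shown above is simply the encoded user name:

```shell
# Base64-encode the CephFS user name for the Secret's userID field.
echo -n 'csi-cephfs-node' | base64
# → Y3NpLWNlcGhmcy1ub2Rl
```

(Using `echo -n` matters: without it, a trailing newline is encoded into the secret value.)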

So, is the missing ID field 'userID' in secrets error a known issue, or am I the only one facing it?

@agarwal-mudit (Contributor) commented

That's a known thing: a static CephFS PV works with userID, not adminID. It's mentioned here.

However, you can create your own secret with userID/userKey and reference it in the PV definition, or edit rook-csi-cephfs-node (as you did in your last post) and use that.
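
A dedicated secret of that shape could look like the following sketch (the secret name and the plaintext key are placeholders; userKey must be a valid CephX key for the user):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: csi-cephfs-static-user   # placeholder name
  namespace: rook-ceph
stringData:                      # stringData lets you write plaintext; k8s base64-encodes it
  userID: csi-cephfs-node
  userKey: <cephx-key-of-the-user>
```

The PV's nodeStageSecretRef would then point at this secret instead of rook-csi-cephfs-node.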

@Madhu-1 (Collaborator) commented Sep 7, 2020

Closing this one as it's not an issue.
